Test Report: KVM_Linux_crio 19734

795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435

Failed tests (17/202)

TestAddons/Setup (2400.05s)
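This run was killed at the ~40-minute mark (see the Non-zero exit below). To retry only this test locally with a longer deadline, a minimal sketch follows; it assumes a minikube source checkout with the integration suite under test/integration and out/minikube-linux-amd64 already built, uses only standard go test flags, and omits any repo-specific build tags or harness flags that may also be required:

	# hypothetical repro sketch: rerun just TestAddons/Setup with a longer deadline;
	# -v, -run and -timeout are plain `go test` flags, not minikube-specific ones
	go test ./test/integration -v -run 'TestAddons/Setup' -timeout 90m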

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-967811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-967811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: signal: killed (39m59.957339687s)

-- stdout --
	* [addons-967811] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-967811" primary control-plane node in "addons-967811" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	  - Using image docker.io/registry:2.8.3
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-967811 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-967811 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
** stderr ** 
	I0930 10:20:40.079545   11632 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:40.079796   11632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:40.079806   11632 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:40.079810   11632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:40.080045   11632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 10:20:40.080749   11632 out.go:352] Setting JSON to false
	I0930 10:20:40.081603   11632 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":187,"bootTime":1727691453,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:40.081721   11632 start.go:139] virtualization: kvm guest
	I0930 10:20:40.083856   11632 out.go:177] * [addons-967811] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:20:40.085416   11632 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:20:40.085425   11632 notify.go:220] Checking for updates...
	I0930 10:20:40.088075   11632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:40.089731   11632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 10:20:40.091103   11632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 10:20:40.092587   11632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:20:40.093980   11632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:20:40.095523   11632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:40.128709   11632 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 10:20:40.129881   11632 start.go:297] selected driver: kvm2
	I0930 10:20:40.129895   11632 start.go:901] validating driver "kvm2" against <nil>
	I0930 10:20:40.129906   11632 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:20:40.130610   11632 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:40.130697   11632 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 10:20:40.146270   11632 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 10:20:40.146318   11632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:40.146618   11632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:20:40.146653   11632 cni.go:84] Creating CNI manager for ""
	I0930 10:20:40.146703   11632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 10:20:40.146716   11632 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 10:20:40.146781   11632 start.go:340] cluster config:
	{Name:addons-967811 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-967811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:20:40.146911   11632 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:40.149523   11632 out.go:177] * Starting "addons-967811" primary control-plane node in "addons-967811" cluster
	I0930 10:20:40.150804   11632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:20:40.150837   11632 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 10:20:40.150845   11632 cache.go:56] Caching tarball of preloaded images
	I0930 10:20:40.150920   11632 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 10:20:40.150930   11632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 10:20:40.151276   11632 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/config.json ...
	I0930 10:20:40.151300   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/config.json: {Name:mk9a5358e739b09678b40da22a30d5cbdefd7eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:20:40.151443   11632 start.go:360] acquireMachinesLock for addons-967811: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 10:20:40.151490   11632 start.go:364] duration metric: took 34.78µs to acquireMachinesLock for "addons-967811"
	I0930 10:20:40.151506   11632 start.go:93] Provisioning new machine with config: &{Name:addons-967811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-967811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:20:40.151567   11632 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 10:20:40.153311   11632 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0930 10:20:40.153456   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:20:40.153498   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:20:40.168267   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32777
	I0930 10:20:40.168696   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:20:40.169221   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:20:40.169257   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:20:40.169595   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:20:40.169776   11632 main.go:141] libmachine: (addons-967811) Calling .GetMachineName
	I0930 10:20:40.169908   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:20:40.170095   11632 start.go:159] libmachine.API.Create for "addons-967811" (driver="kvm2")
	I0930 10:20:40.170123   11632 client.go:168] LocalClient.Create starting
	I0930 10:20:40.170189   11632 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 10:20:40.409110   11632 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 10:20:40.592683   11632 main.go:141] libmachine: Running pre-create checks...
	I0930 10:20:40.592709   11632 main.go:141] libmachine: (addons-967811) Calling .PreCreateCheck
	I0930 10:20:40.593236   11632 main.go:141] libmachine: (addons-967811) Calling .GetConfigRaw
	I0930 10:20:40.593692   11632 main.go:141] libmachine: Creating machine...
	I0930 10:20:40.593707   11632 main.go:141] libmachine: (addons-967811) Calling .Create
	I0930 10:20:40.593828   11632 main.go:141] libmachine: (addons-967811) Creating KVM machine...
	I0930 10:20:40.595132   11632 main.go:141] libmachine: (addons-967811) DBG | found existing default KVM network
	I0930 10:20:40.595868   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:40.595710   11654 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I0930 10:20:40.595893   11632 main.go:141] libmachine: (addons-967811) DBG | created network xml: 
	I0930 10:20:40.595906   11632 main.go:141] libmachine: (addons-967811) DBG | <network>
	I0930 10:20:40.595915   11632 main.go:141] libmachine: (addons-967811) DBG |   <name>mk-addons-967811</name>
	I0930 10:20:40.595925   11632 main.go:141] libmachine: (addons-967811) DBG |   <dns enable='no'/>
	I0930 10:20:40.595932   11632 main.go:141] libmachine: (addons-967811) DBG |   
	I0930 10:20:40.595943   11632 main.go:141] libmachine: (addons-967811) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 10:20:40.595952   11632 main.go:141] libmachine: (addons-967811) DBG |     <dhcp>
	I0930 10:20:40.595961   11632 main.go:141] libmachine: (addons-967811) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 10:20:40.595970   11632 main.go:141] libmachine: (addons-967811) DBG |     </dhcp>
	I0930 10:20:40.595977   11632 main.go:141] libmachine: (addons-967811) DBG |   </ip>
	I0930 10:20:40.595988   11632 main.go:141] libmachine: (addons-967811) DBG |   
	I0930 10:20:40.595996   11632 main.go:141] libmachine: (addons-967811) DBG | </network>
	I0930 10:20:40.596003   11632 main.go:141] libmachine: (addons-967811) DBG | 
	I0930 10:20:40.601752   11632 main.go:141] libmachine: (addons-967811) DBG | trying to create private KVM network mk-addons-967811 192.168.39.0/24...
	I0930 10:20:40.670542   11632 main.go:141] libmachine: (addons-967811) DBG | private KVM network mk-addons-967811 192.168.39.0/24 created
	I0930 10:20:40.670575   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:40.670483   11654 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 10:20:40.670587   11632 main.go:141] libmachine: (addons-967811) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811 ...
	I0930 10:20:40.670604   11632 main.go:141] libmachine: (addons-967811) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 10:20:40.670620   11632 main.go:141] libmachine: (addons-967811) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 10:20:40.925931   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:40.925806   11654 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa...
	I0930 10:20:41.015559   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:41.015412   11654 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/addons-967811.rawdisk...
	I0930 10:20:41.015582   11632 main.go:141] libmachine: (addons-967811) DBG | Writing magic tar header
	I0930 10:20:41.015591   11632 main.go:141] libmachine: (addons-967811) DBG | Writing SSH key tar header
	I0930 10:20:41.015598   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:41.015532   11654 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811 ...
	I0930 10:20:41.015610   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811
	I0930 10:20:41.015623   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811 (perms=drwx------)
	I0930 10:20:41.015662   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 10:20:41.015673   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 10:20:41.015699   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 10:20:41.015721   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 10:20:41.015732   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 10:20:41.015747   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 10:20:41.015758   11632 main.go:141] libmachine: (addons-967811) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 10:20:41.015773   11632 main.go:141] libmachine: (addons-967811) Creating domain...
	I0930 10:20:41.015783   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 10:20:41.015788   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 10:20:41.015838   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home/jenkins
	I0930 10:20:41.015870   11632 main.go:141] libmachine: (addons-967811) DBG | Checking permissions on dir: /home
	I0930 10:20:41.015892   11632 main.go:141] libmachine: (addons-967811) DBG | Skipping /home - not owner
	I0930 10:20:41.016895   11632 main.go:141] libmachine: (addons-967811) define libvirt domain using xml: 
	I0930 10:20:41.016926   11632 main.go:141] libmachine: (addons-967811) <domain type='kvm'>
	I0930 10:20:41.016947   11632 main.go:141] libmachine: (addons-967811)   <name>addons-967811</name>
	I0930 10:20:41.016963   11632 main.go:141] libmachine: (addons-967811)   <memory unit='MiB'>4000</memory>
	I0930 10:20:41.016972   11632 main.go:141] libmachine: (addons-967811)   <vcpu>2</vcpu>
	I0930 10:20:41.016977   11632 main.go:141] libmachine: (addons-967811)   <features>
	I0930 10:20:41.016983   11632 main.go:141] libmachine: (addons-967811)     <acpi/>
	I0930 10:20:41.016988   11632 main.go:141] libmachine: (addons-967811)     <apic/>
	I0930 10:20:41.016995   11632 main.go:141] libmachine: (addons-967811)     <pae/>
	I0930 10:20:41.017000   11632 main.go:141] libmachine: (addons-967811)     
	I0930 10:20:41.017007   11632 main.go:141] libmachine: (addons-967811)   </features>
	I0930 10:20:41.017019   11632 main.go:141] libmachine: (addons-967811)   <cpu mode='host-passthrough'>
	I0930 10:20:41.017032   11632 main.go:141] libmachine: (addons-967811)   
	I0930 10:20:41.017046   11632 main.go:141] libmachine: (addons-967811)   </cpu>
	I0930 10:20:41.017067   11632 main.go:141] libmachine: (addons-967811)   <os>
	I0930 10:20:41.017081   11632 main.go:141] libmachine: (addons-967811)     <type>hvm</type>
	I0930 10:20:41.017093   11632 main.go:141] libmachine: (addons-967811)     <boot dev='cdrom'/>
	I0930 10:20:41.017103   11632 main.go:141] libmachine: (addons-967811)     <boot dev='hd'/>
	I0930 10:20:41.017116   11632 main.go:141] libmachine: (addons-967811)     <bootmenu enable='no'/>
	I0930 10:20:41.017125   11632 main.go:141] libmachine: (addons-967811)   </os>
	I0930 10:20:41.017134   11632 main.go:141] libmachine: (addons-967811)   <devices>
	I0930 10:20:41.017145   11632 main.go:141] libmachine: (addons-967811)     <disk type='file' device='cdrom'>
	I0930 10:20:41.017163   11632 main.go:141] libmachine: (addons-967811)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/boot2docker.iso'/>
	I0930 10:20:41.017178   11632 main.go:141] libmachine: (addons-967811)       <target dev='hdc' bus='scsi'/>
	I0930 10:20:41.017188   11632 main.go:141] libmachine: (addons-967811)       <readonly/>
	I0930 10:20:41.017198   11632 main.go:141] libmachine: (addons-967811)     </disk>
	I0930 10:20:41.017209   11632 main.go:141] libmachine: (addons-967811)     <disk type='file' device='disk'>
	I0930 10:20:41.017222   11632 main.go:141] libmachine: (addons-967811)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 10:20:41.017236   11632 main.go:141] libmachine: (addons-967811)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/addons-967811.rawdisk'/>
	I0930 10:20:41.017252   11632 main.go:141] libmachine: (addons-967811)       <target dev='hda' bus='virtio'/>
	I0930 10:20:41.017268   11632 main.go:141] libmachine: (addons-967811)     </disk>
	I0930 10:20:41.017278   11632 main.go:141] libmachine: (addons-967811)     <interface type='network'>
	I0930 10:20:41.017289   11632 main.go:141] libmachine: (addons-967811)       <source network='mk-addons-967811'/>
	I0930 10:20:41.017303   11632 main.go:141] libmachine: (addons-967811)       <model type='virtio'/>
	I0930 10:20:41.017315   11632 main.go:141] libmachine: (addons-967811)     </interface>
	I0930 10:20:41.017327   11632 main.go:141] libmachine: (addons-967811)     <interface type='network'>
	I0930 10:20:41.017339   11632 main.go:141] libmachine: (addons-967811)       <source network='default'/>
	I0930 10:20:41.017353   11632 main.go:141] libmachine: (addons-967811)       <model type='virtio'/>
	I0930 10:20:41.017369   11632 main.go:141] libmachine: (addons-967811)     </interface>
	I0930 10:20:41.017382   11632 main.go:141] libmachine: (addons-967811)     <serial type='pty'>
	I0930 10:20:41.017392   11632 main.go:141] libmachine: (addons-967811)       <target port='0'/>
	I0930 10:20:41.017405   11632 main.go:141] libmachine: (addons-967811)     </serial>
	I0930 10:20:41.017418   11632 main.go:141] libmachine: (addons-967811)     <console type='pty'>
	I0930 10:20:41.017432   11632 main.go:141] libmachine: (addons-967811)       <target type='serial' port='0'/>
	I0930 10:20:41.017449   11632 main.go:141] libmachine: (addons-967811)     </console>
	I0930 10:20:41.017463   11632 main.go:141] libmachine: (addons-967811)     <rng model='virtio'>
	I0930 10:20:41.017477   11632 main.go:141] libmachine: (addons-967811)       <backend model='random'>/dev/random</backend>
	I0930 10:20:41.017489   11632 main.go:141] libmachine: (addons-967811)     </rng>
	I0930 10:20:41.017501   11632 main.go:141] libmachine: (addons-967811)     
	I0930 10:20:41.017525   11632 main.go:141] libmachine: (addons-967811)     
	I0930 10:20:41.017540   11632 main.go:141] libmachine: (addons-967811)   </devices>
	I0930 10:20:41.017547   11632 main.go:141] libmachine: (addons-967811) </domain>
	I0930 10:20:41.017556   11632 main.go:141] libmachine: (addons-967811) 
	I0930 10:20:41.024148   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:20:5c:f7 in network default
	I0930 10:20:41.024786   11632 main.go:141] libmachine: (addons-967811) Ensuring networks are active...
	I0930 10:20:41.024814   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:41.025647   11632 main.go:141] libmachine: (addons-967811) Ensuring network default is active
	I0930 10:20:41.026007   11632 main.go:141] libmachine: (addons-967811) Ensuring network mk-addons-967811 is active
	I0930 10:20:41.026614   11632 main.go:141] libmachine: (addons-967811) Getting domain xml...
	I0930 10:20:41.027304   11632 main.go:141] libmachine: (addons-967811) Creating domain...
	I0930 10:20:42.439766   11632 main.go:141] libmachine: (addons-967811) Waiting to get IP...
	I0930 10:20:42.440533   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:42.440913   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:42.440962   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:42.440909   11654 retry.go:31] will retry after 294.78404ms: waiting for machine to come up
	I0930 10:20:42.737732   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:42.738221   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:42.738256   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:42.738186   11654 retry.go:31] will retry after 320.437189ms: waiting for machine to come up
	I0930 10:20:43.060831   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:43.061356   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:43.061392   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:43.061308   11654 retry.go:31] will retry after 340.678326ms: waiting for machine to come up
	I0930 10:20:43.403927   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:43.404334   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:43.404359   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:43.404277   11654 retry.go:31] will retry after 556.110406ms: waiting for machine to come up
	I0930 10:20:43.961937   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:43.962360   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:43.962387   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:43.962314   11654 retry.go:31] will retry after 530.221466ms: waiting for machine to come up
	I0930 10:20:44.494047   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:44.494511   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:44.494539   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:44.494460   11654 retry.go:31] will retry after 795.399219ms: waiting for machine to come up
	I0930 10:20:45.291946   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:45.292370   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:45.292394   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:45.292319   11654 retry.go:31] will retry after 1.087630156s: waiting for machine to come up
	I0930 10:20:46.381493   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:46.381910   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:46.381945   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:46.381844   11654 retry.go:31] will retry after 902.510743ms: waiting for machine to come up
	I0930 10:20:47.285917   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:47.286402   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:47.286430   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:47.286353   11654 retry.go:31] will retry after 1.46739997s: waiting for machine to come up
	I0930 10:20:48.755934   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:48.756259   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:48.756316   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:48.756247   11654 retry.go:31] will retry after 1.879822988s: waiting for machine to come up
	I0930 10:20:50.637407   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:50.637831   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:50.637856   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:50.637784   11654 retry.go:31] will retry after 2.514951556s: waiting for machine to come up
	I0930 10:20:53.155581   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:53.156050   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:53.156075   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:53.156003   11654 retry.go:31] will retry after 3.153270284s: waiting for machine to come up
	I0930 10:20:56.310735   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:20:56.311132   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:20:56.311156   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:20:56.311098   11654 retry.go:31] will retry after 3.721986913s: waiting for machine to come up
	I0930 10:21:00.036922   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:00.037245   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find current IP address of domain addons-967811 in network mk-addons-967811
	I0930 10:21:00.037262   11632 main.go:141] libmachine: (addons-967811) DBG | I0930 10:21:00.037235   11654 retry.go:31] will retry after 4.098436748s: waiting for machine to come up
	I0930 10:21:04.136905   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:04.137303   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has current primary IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:04.137332   11632 main.go:141] libmachine: (addons-967811) Found IP for machine: 192.168.39.187
	I0930 10:21:04.137347   11632 main.go:141] libmachine: (addons-967811) Reserving static IP address...
	I0930 10:21:04.137725   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find host DHCP lease matching {name: "addons-967811", mac: "52:54:00:88:3e:c1", ip: "192.168.39.187"} in network mk-addons-967811
	I0930 10:21:04.211300   11632 main.go:141] libmachine: (addons-967811) DBG | Getting to WaitForSSH function...
	I0930 10:21:04.211327   11632 main.go:141] libmachine: (addons-967811) Reserved static IP address: 192.168.39.187
	I0930 10:21:04.211341   11632 main.go:141] libmachine: (addons-967811) Waiting for SSH to be available...
	I0930 10:21:04.213679   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:04.213967   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811
	I0930 10:21:04.213994   11632 main.go:141] libmachine: (addons-967811) DBG | unable to find defined IP address of network mk-addons-967811 interface with MAC address 52:54:00:88:3e:c1
	I0930 10:21:04.214155   11632 main.go:141] libmachine: (addons-967811) DBG | Using SSH client type: external
	I0930 10:21:04.214184   11632 main.go:141] libmachine: (addons-967811) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa (-rw-------)
	I0930 10:21:04.214215   11632 main.go:141] libmachine: (addons-967811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 10:21:04.214230   11632 main.go:141] libmachine: (addons-967811) DBG | About to run SSH command:
	I0930 10:21:04.214243   11632 main.go:141] libmachine: (addons-967811) DBG | exit 0
	I0930 10:21:04.225877   11632 main.go:141] libmachine: (addons-967811) DBG | SSH cmd err, output: exit status 255: 
	I0930 10:21:04.225904   11632 main.go:141] libmachine: (addons-967811) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 10:21:04.225911   11632 main.go:141] libmachine: (addons-967811) DBG | command : exit 0
	I0930 10:21:04.225915   11632 main.go:141] libmachine: (addons-967811) DBG | err     : exit status 255
	I0930 10:21:04.225962   11632 main.go:141] libmachine: (addons-967811) DBG | output  : 
	I0930 10:21:07.227608   11632 main.go:141] libmachine: (addons-967811) DBG | Getting to WaitForSSH function...
	I0930 10:21:07.230047   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.230479   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.230504   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.230598   11632 main.go:141] libmachine: (addons-967811) DBG | Using SSH client type: external
	I0930 10:21:07.230625   11632 main.go:141] libmachine: (addons-967811) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa (-rw-------)
	I0930 10:21:07.230664   11632 main.go:141] libmachine: (addons-967811) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 10:21:07.230678   11632 main.go:141] libmachine: (addons-967811) DBG | About to run SSH command:
	I0930 10:21:07.230688   11632 main.go:141] libmachine: (addons-967811) DBG | exit 0
	I0930 10:21:07.357696   11632 main.go:141] libmachine: (addons-967811) DBG | SSH cmd err, output: <nil>: 
	I0930 10:21:07.357962   11632 main.go:141] libmachine: (addons-967811) KVM machine creation complete!
	I0930 10:21:07.358265   11632 main.go:141] libmachine: (addons-967811) Calling .GetConfigRaw
	I0930 10:21:07.417262   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:07.417597   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:07.417827   11632 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 10:21:07.417846   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:07.419212   11632 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 10:21:07.419226   11632 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 10:21:07.419233   11632 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 10:21:07.419246   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:07.421451   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.421842   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.421867   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.421963   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:07.422137   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.422321   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.422443   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:07.422594   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:07.422814   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:07.422830   11632 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 10:21:07.537189   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:21:07.537218   11632 main.go:141] libmachine: Detecting the provisioner...
	I0930 10:21:07.537227   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:07.540254   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.540658   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.540679   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.540856   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:07.541070   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.541227   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.541525   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:07.541694   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:07.541857   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:07.541868   11632 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 10:21:07.654732   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 10:21:07.654784   11632 main.go:141] libmachine: found compatible host: buildroot
	I0930 10:21:07.654795   11632 main.go:141] libmachine: Provisioning with buildroot...
	I0930 10:21:07.654807   11632 main.go:141] libmachine: (addons-967811) Calling .GetMachineName
	I0930 10:21:07.655076   11632 buildroot.go:166] provisioning hostname "addons-967811"
	I0930 10:21:07.655109   11632 main.go:141] libmachine: (addons-967811) Calling .GetMachineName
	I0930 10:21:07.655323   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:07.658038   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.658503   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.658527   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.658829   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:07.659029   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.659186   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.659336   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:07.659518   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:07.659691   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:07.659702   11632 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-967811 && echo "addons-967811" | sudo tee /etc/hostname
	I0930 10:21:07.788328   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-967811
	
	I0930 10:21:07.788362   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:07.791060   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.791341   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.791368   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.791522   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:07.791711   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.791884   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:07.792004   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:07.792160   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:07.792362   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:07.792386   11632 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-967811' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-967811/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-967811' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:21:07.914571   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:21:07.914595   11632 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 10:21:07.914634   11632 buildroot.go:174] setting up certificates
	I0930 10:21:07.914645   11632 provision.go:84] configureAuth start
	I0930 10:21:07.914656   11632 main.go:141] libmachine: (addons-967811) Calling .GetMachineName
	I0930 10:21:07.914928   11632 main.go:141] libmachine: (addons-967811) Calling .GetIP
	I0930 10:21:07.917456   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.917859   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.917889   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.918074   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:07.920073   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.920355   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:07.920373   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:07.920525   11632 provision.go:143] copyHostCerts
	I0930 10:21:07.920608   11632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 10:21:07.920737   11632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 10:21:07.920810   11632 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 10:21:07.920873   11632 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.addons-967811 san=[127.0.0.1 192.168.39.187 addons-967811 localhost minikube]
	I0930 10:21:08.381898   11632 provision.go:177] copyRemoteCerts
	I0930 10:21:08.381956   11632 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:21:08.381979   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:08.384594   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.384865   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.384890   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.385046   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:08.385250   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.385409   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:08.385527   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:08.472644   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 10:21:08.501395   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:21:08.526816   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 10:21:08.551880   11632 provision.go:87] duration metric: took 637.220629ms to configureAuth
	I0930 10:21:08.551907   11632 buildroot.go:189] setting minikube options for container-runtime
	I0930 10:21:08.552081   11632 config.go:182] Loaded profile config "addons-967811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:21:08.552164   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:08.554843   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.555261   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.555293   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.555476   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:08.555676   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.555851   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.555969   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:08.556094   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:08.556293   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:08.556310   11632 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 10:21:08.794660   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 10:21:08.794693   11632 main.go:141] libmachine: Checking connection to Docker...
	I0930 10:21:08.794711   11632 main.go:141] libmachine: (addons-967811) Calling .GetURL
	I0930 10:21:08.795973   11632 main.go:141] libmachine: (addons-967811) DBG | Using libvirt version 6000000
	I0930 10:21:08.797813   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.798078   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.798097   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.798295   11632 main.go:141] libmachine: Docker is up and running!
	I0930 10:21:08.798307   11632 main.go:141] libmachine: Reticulating splines...
	I0930 10:21:08.798314   11632 client.go:171] duration metric: took 28.628181974s to LocalClient.Create
	I0930 10:21:08.798334   11632 start.go:167] duration metric: took 28.628241015s to libmachine.API.Create "addons-967811"
	I0930 10:21:08.798344   11632 start.go:293] postStartSetup for "addons-967811" (driver="kvm2")
	I0930 10:21:08.798353   11632 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:21:08.798371   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:08.798604   11632 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:21:08.798624   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:08.800730   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.801003   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.801029   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.801123   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:08.801289   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.801431   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:08.801553   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:08.888436   11632 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:21:08.892990   11632 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 10:21:08.893018   11632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 10:21:08.893104   11632 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 10:21:08.893129   11632 start.go:296] duration metric: took 94.780234ms for postStartSetup
	I0930 10:21:08.893160   11632 main.go:141] libmachine: (addons-967811) Calling .GetConfigRaw
	I0930 10:21:08.893750   11632 main.go:141] libmachine: (addons-967811) Calling .GetIP
	I0930 10:21:08.896395   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.896732   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.896764   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.896980   11632 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/config.json ...
	I0930 10:21:08.897226   11632 start.go:128] duration metric: took 28.745647105s to createHost
	I0930 10:21:08.897260   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:08.899269   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.899531   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:08.899558   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:08.899725   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:08.899909   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.900049   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:08.900178   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:08.900312   11632 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:08.900494   11632 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0930 10:21:08.900509   11632 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 10:21:09.018595   11632 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727691668.998338562
	
	I0930 10:21:09.018624   11632 fix.go:216] guest clock: 1727691668.998338562
	I0930 10:21:09.018646   11632 fix.go:229] Guest: 2024-09-30 10:21:08.998338562 +0000 UTC Remote: 2024-09-30 10:21:08.89724295 +0000 UTC m=+28.852948906 (delta=101.095612ms)
	I0930 10:21:09.018675   11632 fix.go:200] guest clock delta is within tolerance: 101.095612ms
	I0930 10:21:09.018681   11632 start.go:83] releasing machines lock for "addons-967811", held for 28.867181712s
	I0930 10:21:09.018700   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:09.018907   11632 main.go:141] libmachine: (addons-967811) Calling .GetIP
	I0930 10:21:09.021306   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.021561   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:09.021586   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.021810   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:09.022272   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:09.022430   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:09.022526   11632 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:21:09.022572   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:09.022614   11632 ssh_runner.go:195] Run: cat /version.json
	I0930 10:21:09.022644   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:09.025215   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.025418   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.025538   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:09.025568   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.025647   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:09.025921   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:09.025951   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:09.025956   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:09.026062   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:09.026143   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:09.026207   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:09.026262   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:09.026304   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:09.026430   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:09.106997   11632 ssh_runner.go:195] Run: systemctl --version
	I0930 10:21:09.140330   11632 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 10:21:09.299874   11632 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 10:21:09.306072   11632 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 10:21:09.306133   11632 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:21:09.322956   11632 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 10:21:09.322983   11632 start.go:495] detecting cgroup driver to use...
	I0930 10:21:09.323040   11632 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 10:21:09.339841   11632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 10:21:09.354899   11632 docker.go:217] disabling cri-docker service (if available) ...
	I0930 10:21:09.354955   11632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 10:21:09.370161   11632 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 10:21:09.384725   11632 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 10:21:09.500751   11632 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 10:21:09.663360   11632 docker.go:233] disabling docker service ...
	I0930 10:21:09.663419   11632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 10:21:09.678384   11632 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 10:21:09.691938   11632 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 10:21:09.811109   11632 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 10:21:09.931948   11632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 10:21:09.946254   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:21:09.966290   11632 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 10:21:09.966359   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:09.977323   11632 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 10:21:09.977392   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:09.988743   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:09.999882   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:10.011528   11632 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:21:10.023297   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:10.034967   11632 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:10.053328   11632 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 10:21:10.064992   11632 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:21:10.075433   11632 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 10:21:10.075487   11632 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 10:21:10.089435   11632 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:21:10.099832   11632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:10.225686   11632 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 10:21:10.317027   11632 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 10:21:10.317122   11632 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 10:21:10.322129   11632 start.go:563] Will wait 60s for crictl version
	I0930 10:21:10.322219   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:21:10.326023   11632 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:21:10.370638   11632 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 10:21:10.370766   11632 ssh_runner.go:195] Run: crio --version
	I0930 10:21:10.399311   11632 ssh_runner.go:195] Run: crio --version
	I0930 10:21:10.430559   11632 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 10:21:10.432220   11632 main.go:141] libmachine: (addons-967811) Calling .GetIP
	I0930 10:21:10.434877   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:10.435197   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:10.435222   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:10.435415   11632 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 10:21:10.439750   11632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:21:10.452887   11632 kubeadm.go:883] updating cluster {Name:addons-967811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-967811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:21:10.453006   11632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 10:21:10.453064   11632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:21:10.486910   11632 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 10:21:10.486992   11632 ssh_runner.go:195] Run: which lz4
	I0930 10:21:10.491411   11632 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 10:21:10.495746   11632 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 10:21:10.495782   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 10:21:11.819457   11632 crio.go:462] duration metric: took 1.328096969s to copy over tarball
	I0930 10:21:11.819526   11632 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 10:21:14.006728   11632 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.187171474s)
	I0930 10:21:14.006757   11632 crio.go:469] duration metric: took 2.187267959s to extract the tarball
	I0930 10:21:14.006764   11632 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 10:21:14.043737   11632 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 10:21:14.085580   11632 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 10:21:14.085603   11632 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:21:14.085611   11632 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.1 crio true true} ...
	I0930 10:21:14.085748   11632 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-967811 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-967811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:21:14.085830   11632 ssh_runner.go:195] Run: crio config
	I0930 10:21:14.133123   11632 cni.go:84] Creating CNI manager for ""
	I0930 10:21:14.133147   11632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 10:21:14.133156   11632 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:21:14.133178   11632 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-967811 NodeName:addons-967811 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:21:14.133297   11632 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-967811"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:21:14.133355   11632 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:21:14.144263   11632 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:21:14.144348   11632 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:21:14.154978   11632 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 10:21:14.173551   11632 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:21:14.192378   11632 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0930 10:21:14.211226   11632 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0930 10:21:14.215473   11632 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:21:14.229041   11632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:14.352167   11632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:14.369826   11632 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811 for IP: 192.168.39.187
	I0930 10:21:14.369859   11632 certs.go:194] generating shared ca certs ...
	I0930 10:21:14.369877   11632 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.370046   11632 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 10:21:14.528308   11632 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt ...
	I0930 10:21:14.528344   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt: {Name:mk44a50e652a1245861ea4950acea8a23a35cf9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.528532   11632 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key ...
	I0930 10:21:14.528546   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key: {Name:mkb83db502c4782e88e88fa168a3231d3f126793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.528626   11632 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 10:21:14.668875   11632 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt ...
	I0930 10:21:14.668910   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt: {Name:mk517b4f33399160d5c1d89d4b2dd5e1dc8041ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.669121   11632 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key ...
	I0930 10:21:14.669134   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key: {Name:mk3943d6a7f23f4a5debbf19ad74f01de28c1346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.669217   11632 certs.go:256] generating profile certs ...
	I0930 10:21:14.669273   11632 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.key
	I0930 10:21:14.669294   11632 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.crt with IP's: []
	I0930 10:21:14.822142   11632 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.crt ...
	I0930 10:21:14.822185   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.crt: {Name:mk91bbf9c42708db8da5dd9bc6a362ed64d4807b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.822366   11632 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.key ...
	I0930 10:21:14.822378   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/client.key: {Name:mkc97f6d3362cd6804089821d284a5041e96a067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.822452   11632 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key.b82f8186
	I0930 10:21:14.822472   11632 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt.b82f8186 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187]
	I0930 10:21:14.977608   11632 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt.b82f8186 ...
	I0930 10:21:14.977652   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt.b82f8186: {Name:mk0755b930293a7f9c05b07e593d13091f121c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.977816   11632 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key.b82f8186 ...
	I0930 10:21:14.977828   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key.b82f8186: {Name:mk8d055b9dc277a7b97e6c5eb9c8d9c4fa1efe89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:14.977901   11632 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt.b82f8186 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt
	I0930 10:21:14.977977   11632 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key.b82f8186 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key
	I0930 10:21:14.978023   11632 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.key
	I0930 10:21:14.978040   11632 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.crt with IP's: []
	I0930 10:21:15.139150   11632 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.crt ...
	I0930 10:21:15.139181   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.crt: {Name:mk4286d3c87444bc173add484b26fc04cb8b0936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:15.139370   11632 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.key ...
	I0930 10:21:15.139384   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.key: {Name:mk5e6c6da8057389dbef42522389f4480a6ed9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:15.139581   11632 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 10:21:15.139614   11632 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 10:21:15.139636   11632 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:21:15.139657   11632 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 10:21:15.140195   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:21:15.172133   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 10:21:15.202562   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:21:15.227270   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 10:21:15.252985   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:21:15.277386   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 10:21:15.306346   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:21:15.333540   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/addons-967811/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 10:21:15.361774   11632 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:21:15.386218   11632 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:21:15.404103   11632 ssh_runner.go:195] Run: openssl version
	I0930 10:21:15.410100   11632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:21:15.421302   11632 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:15.426287   11632 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:15.426337   11632 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:15.432073   11632 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:21:15.442590   11632 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:21:15.446919   11632 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:21:15.446969   11632 kubeadm.go:392] StartCluster: {Name:addons-967811 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-967811 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:21:15.447031   11632 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 10:21:15.447070   11632 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 10:21:15.485300   11632 cri.go:89] found id: ""
	I0930 10:21:15.485377   11632 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:21:15.496351   11632 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:21:15.506071   11632 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:21:15.516034   11632 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:21:15.516057   11632 kubeadm.go:157] found existing configuration files:
	
	I0930 10:21:15.516232   11632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:21:15.525593   11632 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:21:15.525667   11632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:21:15.534907   11632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:21:15.543770   11632 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:21:15.543834   11632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:21:15.553311   11632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:21:15.562361   11632 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:21:15.562414   11632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:21:15.572061   11632 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:21:15.581544   11632 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:21:15.581626   11632 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:21:15.591197   11632 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 10:21:15.645418   11632 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:21:15.645674   11632 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:21:15.751192   11632 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:21:15.751364   11632 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:21:15.751491   11632 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:21:15.759784   11632 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:21:15.863371   11632 out.go:235]   - Generating certificates and keys ...
	I0930 10:21:15.863479   11632 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:21:15.863558   11632 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:21:15.936146   11632 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:21:16.160512   11632 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:21:16.573253   11632 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:21:16.767766   11632 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:21:16.993103   11632 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:21:16.993272   11632 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-967811 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0930 10:21:17.062607   11632 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:21:17.062751   11632 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-967811 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0930 10:21:17.459582   11632 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:21:17.881753   11632 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:21:17.936205   11632 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:21:17.936272   11632 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:21:18.004050   11632 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:21:18.278929   11632 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:21:18.391300   11632 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:21:18.558563   11632 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:21:18.817209   11632 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:21:18.817535   11632 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:21:18.819997   11632 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:21:18.822191   11632 out.go:235]   - Booting up control plane ...
	I0930 10:21:18.822290   11632 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:21:18.822362   11632 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:21:18.822428   11632 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:21:18.841720   11632 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:21:18.850974   11632 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:21:18.851054   11632 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:21:18.976759   11632 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:21:18.976897   11632 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:21:19.477514   11632 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.226187ms
	I0930 10:21:19.477595   11632 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:21:24.978484   11632 kubeadm.go:310] [api-check] The API server is healthy after 5.501792475s
	I0930 10:21:24.992122   11632 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:21:25.020371   11632 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:21:25.065233   11632 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:21:25.065486   11632 kubeadm.go:310] [mark-control-plane] Marking the node addons-967811 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:21:25.081462   11632 kubeadm.go:310] [bootstrap-token] Using token: 4m4rek.ifpjf7slu0jo5rr4
	I0930 10:21:25.083347   11632 out.go:235]   - Configuring RBAC rules ...
	I0930 10:21:25.083507   11632 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:21:25.100364   11632 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:21:25.114795   11632 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:21:25.120636   11632 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:21:25.125352   11632 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:21:25.129577   11632 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:21:25.383838   11632 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:21:25.823797   11632 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:21:26.384083   11632 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:21:26.384122   11632 kubeadm.go:310] 
	I0930 10:21:26.384209   11632 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:21:26.384221   11632 kubeadm.go:310] 
	I0930 10:21:26.384315   11632 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:21:26.384336   11632 kubeadm.go:310] 
	I0930 10:21:26.384378   11632 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:21:26.384481   11632 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:21:26.384543   11632 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:21:26.384550   11632 kubeadm.go:310] 
	I0930 10:21:26.384603   11632 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:21:26.384609   11632 kubeadm.go:310] 
	I0930 10:21:26.384661   11632 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:21:26.384670   11632 kubeadm.go:310] 
	I0930 10:21:26.384731   11632 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:21:26.384834   11632 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:21:26.384930   11632 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:21:26.384939   11632 kubeadm.go:310] 
	I0930 10:21:26.385018   11632 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:21:26.385088   11632 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:21:26.385094   11632 kubeadm.go:310] 
	I0930 10:21:26.385179   11632 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4m4rek.ifpjf7slu0jo5rr4 \
	I0930 10:21:26.385308   11632 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 \
	I0930 10:21:26.385338   11632 kubeadm.go:310] 	--control-plane 
	I0930 10:21:26.385348   11632 kubeadm.go:310] 
	I0930 10:21:26.385446   11632 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:21:26.385456   11632 kubeadm.go:310] 
	I0930 10:21:26.385560   11632 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4m4rek.ifpjf7slu0jo5rr4 \
	I0930 10:21:26.385718   11632 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 
	I0930 10:21:26.386433   11632 kubeadm.go:310] W0930 10:21:15.628417     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:26.386721   11632 kubeadm.go:310] W0930 10:21:15.629400     817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:26.386833   11632 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:21:26.386859   11632 cni.go:84] Creating CNI manager for ""
	I0930 10:21:26.386873   11632 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 10:21:26.388598   11632 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 10:21:26.390042   11632 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 10:21:26.401722   11632 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 10:21:26.420256   11632 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:21:26.420333   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:26.420380   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-967811 minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-967811 minikube.k8s.io/primary=true
	I0930 10:21:26.456418   11632 ops.go:34] apiserver oom_adj: -16
	I0930 10:21:26.580363   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.080531   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.581410   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.081280   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.581279   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.080539   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.581414   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.081353   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.580629   11632 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.679507   11632 kubeadm.go:1113] duration metric: took 4.259237067s to wait for elevateKubeSystemPrivileges
	I0930 10:21:30.679543   11632 kubeadm.go:394] duration metric: took 15.232577791s to StartCluster
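	[editor's note] A minimal shell sketch of what the elevateKubeSystemPrivileges step logged above amounts to. minikube issues these commands over SSH from Go rather than as a script, the ~500 ms retry cadence is inferred from the timestamps, and the ordering is simplified (the log shows the clusterrolebinding create issued alongside the polling).

	    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
	    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	    # Poll until the "default" service account exists (created asynchronously after kubeadm init).
	    until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" get sa default >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    # Grant cluster-admin to kube-system:default so the addon manifests applied later are authorized.
	    sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default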
	I0930 10:21:30.679559   11632 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:30.679668   11632 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 10:21:30.680624   11632 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:30.680932   11632 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 10:21:30.681307   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:21:30.681108   11632 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:21:30.681652   11632 config.go:182] Loaded profile config "addons-967811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:21:30.681706   11632 addons.go:69] Setting yakd=true in profile "addons-967811"
	I0930 10:21:30.681720   11632 addons.go:69] Setting cloud-spanner=true in profile "addons-967811"
	I0930 10:21:30.681733   11632 addons.go:234] Setting addon cloud-spanner=true in "addons-967811"
	I0930 10:21:30.681741   11632 addons.go:69] Setting metrics-server=true in profile "addons-967811"
	I0930 10:21:30.681755   11632 addons.go:234] Setting addon metrics-server=true in "addons-967811"
	I0930 10:21:30.681752   11632 addons.go:69] Setting ingress=true in profile "addons-967811"
	I0930 10:21:30.681770   11632 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-967811"
	I0930 10:21:30.681774   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681775   11632 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-967811"
	I0930 10:21:30.681771   11632 addons.go:69] Setting storage-provisioner=true in profile "addons-967811"
	I0930 10:21:30.681779   11632 addons.go:69] Setting default-storageclass=true in profile "addons-967811"
	I0930 10:21:30.681770   11632 addons.go:69] Setting ingress-dns=true in profile "addons-967811"
	I0930 10:21:30.681799   11632 addons.go:234] Setting addon storage-provisioner=true in "addons-967811"
	I0930 10:21:30.681799   11632 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-967811"
	I0930 10:21:30.681799   11632 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-967811"
	I0930 10:21:30.681805   11632 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-967811"
	I0930 10:21:30.681810   11632 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-967811"
	I0930 10:21:30.681781   11632 addons.go:234] Setting addon ingress=true in "addons-967811"
	I0930 10:21:30.681819   11632 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-967811"
	I0930 10:21:30.681752   11632 addons.go:69] Setting gcp-auth=true in profile "addons-967811"
	I0930 10:21:30.681827   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681839   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681850   11632 addons.go:69] Setting registry=true in profile "addons-967811"
	I0930 10:21:30.681865   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681887   11632 addons.go:234] Setting addon registry=true in "addons-967811"
	I0930 10:21:30.681909   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.682259   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682275   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682281   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682287   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682293   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.681841   11632 mustload.go:65] Loading cluster: addons-967811
	I0930 10:21:30.682307   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682312   11632 addons.go:69] Setting volumesnapshots=true in profile "addons-967811"
	I0930 10:21:30.682325   11632 addons.go:234] Setting addon volumesnapshots=true in "addons-967811"
	I0930 10:21:30.682421   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682443   11632 config.go:182] Loaded profile config "addons-967811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 10:21:30.682492   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681733   11632 addons.go:234] Setting addon yakd=true in "addons-967811"
	I0930 10:21:30.682592   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.682769   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682792   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682834   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.681812   11632 addons.go:234] Setting addon ingress-dns=true in "addons-967811"
	I0930 10:21:30.682325   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682856   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682867   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.681823   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.682930   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682947   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682980   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.682991   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.681791   11632 addons.go:69] Setting volcano=true in profile "addons-967811"
	I0930 10:21:30.683103   11632 addons.go:234] Setting addon volcano=true in "addons-967811"
	I0930 10:21:30.683115   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.683132   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.683135   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.683183   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.681764   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.683463   11632 out.go:177] * Verifying Kubernetes components...
	I0930 10:21:30.681712   11632 addons.go:69] Setting inspektor-gadget=true in profile "addons-967811"
	I0930 10:21:30.683678   11632 addons.go:234] Setting addon inspektor-gadget=true in "addons-967811"
	I0930 10:21:30.683711   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.682301   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.682851   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.684072   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.684086   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.684114   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.685138   11632 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:30.704014   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0930 10:21:30.718328   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.718360   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.718381   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.718402   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.718722   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33615
	I0930 10:21:30.718779   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.718819   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.720569   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39949
	I0930 10:21:30.721062   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.721173   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.721257   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.721630   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.721664   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.721813   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.721833   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.721966   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.721985   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.722357   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.722414   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.722474   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.722519   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.722645   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.723022   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.725167   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.725602   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.725656   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.731958   11632 addons.go:234] Setting addon default-storageclass=true in "addons-967811"
	I0930 10:21:30.732003   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.732414   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.732463   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.733767   11632 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-967811"
	I0930 10:21:30.733805   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:30.734161   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.734208   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.745078   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I0930 10:21:30.745718   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.746690   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.746709   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.747036   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.747561   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.747598   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.756278   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0930 10:21:30.756441   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43157
	I0930 10:21:30.756867   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.757353   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.757371   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.757447   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.757748   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.758285   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.758324   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.758600   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0930 10:21:30.759136   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.759168   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.759892   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.760461   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.760496   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.760727   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.760810   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0930 10:21:30.762134   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.762302   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.762313   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.762811   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.762827   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.762957   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.763246   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.763662   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.763699   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.763886   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34701
	I0930 10:21:30.764254   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.764286   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.776529   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.776659   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0930 10:21:30.776797   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0930 10:21:30.776871   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32815
	I0930 10:21:30.776947   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0930 10:21:30.777958   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.778054   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41541
	I0930 10:21:30.778146   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.778318   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.778329   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.779961   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.780036   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.780042   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.780053   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.780114   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.780143   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.780161   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.780281   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.780512   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.780528   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.780535   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.780574   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.781107   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44509
	I0930 10:21:30.781161   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.781193   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.781239   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34757
	I0930 10:21:30.781329   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.781654   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.781711   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.781920   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.781958   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.782097   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.782115   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.782552   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.783085   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.783129   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.783288   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.783308   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.783681   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.783720   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0930 10:21:30.784389   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.784422   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.784869   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.784947   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.785008   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.785482   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.786070   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.786086   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.786497   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.787003   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.787020   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.787114   11632 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:21:30.787960   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.788537   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.788548   11632 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:30.788563   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:21:30.788568   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.788580   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.788880   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.788894   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.789308   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.791854   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42891
	I0930 10:21:30.791925   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0930 10:21:30.791985   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.792344   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.792362   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.792713   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.792883   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.793050   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.793192   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.801012   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38165
	I0930 10:21:30.801768   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.802409   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.802429   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.802796   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.802975   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.804724   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.805742   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0930 10:21:30.806134   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.806571   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.806595   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.806962   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.807167   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.807171   11632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:30.807867   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39757
	I0930 10:21:30.808285   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.808744   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.808763   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.809144   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.809196   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.809397   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.811326   11632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:21:30.811332   11632 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:21:30.811366   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.811398   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0930 10:21:30.811762   11632 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:30.811779   11632 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:21:30.811795   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.811987   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.812655   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.812673   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.813004   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.813115   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0930 10:21:30.813196   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.813492   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0930 10:21:30.814631   11632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:30.814729   11632 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:21:30.814742   11632 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:21:30.814761   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.814872   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.814948   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0930 10:21:30.815715   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:30.815732   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:30.815903   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:30.815929   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:30.815935   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:30.815943   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:30.815948   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:30.816892   11632 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:30.816910   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:21:30.816926   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.816984   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.817001   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.817016   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.817036   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:30.817053   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:30.817059   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 10:21:30.817123   11632 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0930 10:21:30.817901   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.817936   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.818182   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.818210   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.818235   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.818283   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.818485   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.818540   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.818627   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.818687   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.818979   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.819585   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.819967   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.819990   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.820057   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.820179   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.820278   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.820385   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.822203   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.822249   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.822752   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.822783   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.822986   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.823074   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.823173   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.823232   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.823444   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.823457   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.823518   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.823679   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.823690   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.823805   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.823815   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.823869   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.824027   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.824066   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.824101   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.824719   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.824736   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.824790   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.824900   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.824909   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.824955   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.825664   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.825707   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.826374   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.826409   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.826966   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:30.826998   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:30.827201   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.827217   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.827268   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.829030   11632 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:21:30.829032   11632 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:21:30.829033   11632 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:21:30.830921   11632 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:21:30.830928   11632 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:21:30.830946   11632 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:21:30.830966   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.831098   11632 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:30.831110   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:21:30.831127   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.832698   11632 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:21:30.832712   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:21:30.832729   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.835298   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.835400   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.835940   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.835960   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.836038   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.836063   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.836205   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.836262   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.836487   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.836490   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.836652   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.836777   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.837071   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.837127   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.837231   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.837613   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.837653   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.837826   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.838012   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.838142   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.838256   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.843310   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0930 10:21:30.843841   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.844452   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.844476   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.844826   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.844979   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.846744   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.848323   11632 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:21:30.849875   11632 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:21:30.851017   11632 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:30.851035   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:21:30.851057   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.851224   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
	I0930 10:21:30.851242   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0930 10:21:30.851249   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33351
	I0930 10:21:30.851328   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I0930 10:21:30.851738   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.851739   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.852192   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.852214   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.852341   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.852358   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.852415   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.852483   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.852556   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.852725   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.852937   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.853073   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.853087   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.853335   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.853427   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.853778   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.853928   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.853945   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.854296   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.854729   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.855254   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.856677   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.856702   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.856703   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.857105   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:21:30.857211   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.857238   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.857338   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.857462   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.857601   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.857763   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.857850   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.858369   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:21:30.858376   11632 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:21:30.858414   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:21:30.858386   11632 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:21:30.858433   11632 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:21:30.858440   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.859099   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0930 10:21:30.859903   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:30.859919   11632 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:21:30.859932   11632 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:21:30.859941   11632 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:30.859949   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:21:30.859949   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.859962   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.860530   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:30.860552   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:30.860923   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:30.861058   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:21:30.861205   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:30.861890   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.862428   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.862449   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.862662   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.862821   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.862934   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.863037   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.863570   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:21:30.863683   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.863769   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:30.864183   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.864306   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.864344   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.864491   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.864499   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.864664   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.864815   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.865055   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.864889   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.865108   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.865177   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.865279   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.865388   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.865442   11632 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:21:30.866106   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:21:30.867317   11632 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:30.867331   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:21:30.867341   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.867380   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:21:30.868346   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:21:30.869325   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 10:21:30.869953   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.870314   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.870334   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.870512   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.870641   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.870732   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.870813   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:30.871375   11632 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:21:30.872492   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:21:30.872505   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:21:30.872519   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:30.875225   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.875608   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:30.875631   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:30.875808   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:30.875939   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:30.876026   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:30.876100   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:31.259673   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:31.264683   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:31.285537   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:21:31.285566   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:21:31.321198   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:21:31.321213   11632 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:31.329977   11632 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:21:31.330000   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:21:31.340143   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:31.359582   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.409721   11632 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:21:31.409752   11632 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:21:31.411966   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.435395   11632 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:21:31.435425   11632 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:21:31.439032   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:31.478936   11632 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:21:31.478961   11632 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:21:31.482483   11632 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:21:31.482505   11632 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:21:31.500069   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:21:31.500103   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:21:31.500399   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:31.562818   11632 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:21:31.562841   11632 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:21:31.667554   11632 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:21:31.667583   11632 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:21:31.674953   11632 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:21:31.674982   11632 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:21:31.677272   11632 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:31.677293   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:21:31.747357   11632 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:21:31.747380   11632 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:21:31.757343   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:21:31.757368   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:21:31.767546   11632 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:31.767575   11632 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:21:31.906025   11632 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:21:31.906048   11632 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:21:31.912238   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:31.944476   11632 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:21:31.944501   11632 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:21:31.947507   11632 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:21:31.947529   11632 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:21:31.970566   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:21:31.970597   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:21:32.003435   11632 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:21:32.003459   11632 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:21:32.019067   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:32.130295   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:21:32.130338   11632 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:21:32.130434   11632 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:32.130454   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:21:32.223238   11632 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:21:32.223274   11632 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:21:32.242642   11632 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:21:32.242681   11632 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:21:32.265488   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:32.281569   11632 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:32.281596   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:21:32.522197   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:32.563930   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:21:32.563958   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:21:32.571532   11632 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:21:32.571560   11632 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:21:32.725021   11632 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:21:32.725053   11632 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:21:32.825135   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:21:32.825167   11632 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:21:33.078932   11632 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:33.078960   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:21:33.103828   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:21:33.103850   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:21:33.394908   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:33.417841   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:21:33.417871   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:21:33.764197   11632 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:33.764223   11632 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:21:34.128289   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:37.896357   11632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:21:37.896399   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:37.899659   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:37.900076   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:37.900103   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:37.900365   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:37.900574   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:37.900787   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:37.900935   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:38.284491   11632 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:21:38.403839   11632 addons.go:234] Setting addon gcp-auth=true in "addons-967811"
	I0930 10:21:38.403902   11632 host.go:66] Checking if "addons-967811" exists ...
	I0930 10:21:38.404246   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:38.404294   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:38.420788   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0930 10:21:38.421257   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:38.421772   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:38.421796   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:38.422187   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:38.422771   11632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 10:21:38.422821   11632 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 10:21:38.439847   11632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I0930 10:21:38.440361   11632 main.go:141] libmachine: () Calling .GetVersion
	I0930 10:21:38.440853   11632 main.go:141] libmachine: Using API Version  1
	I0930 10:21:38.440874   11632 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 10:21:38.441257   11632 main.go:141] libmachine: () Calling .GetMachineName
	I0930 10:21:38.441425   11632 main.go:141] libmachine: (addons-967811) Calling .GetState
	I0930 10:21:38.443336   11632 main.go:141] libmachine: (addons-967811) Calling .DriverName
	I0930 10:21:38.443562   11632 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:21:38.443581   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHHostname
	I0930 10:21:38.446232   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:38.446672   11632 main.go:141] libmachine: (addons-967811) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:3e:c1", ip: ""} in network mk-addons-967811: {Iface:virbr1 ExpiryTime:2024-09-30 11:20:55 +0000 UTC Type:0 Mac:52:54:00:88:3e:c1 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-967811 Clientid:01:52:54:00:88:3e:c1}
	I0930 10:21:38.446702   11632 main.go:141] libmachine: (addons-967811) DBG | domain addons-967811 has defined IP address 192.168.39.187 and MAC address 52:54:00:88:3e:c1 in network mk-addons-967811
	I0930 10:21:38.446911   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHPort
	I0930 10:21:38.447127   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHKeyPath
	I0930 10:21:38.447294   11632 main.go:141] libmachine: (addons-967811) Calling .GetSSHUsername
	I0930 10:21:38.447498   11632 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/addons-967811/id_rsa Username:docker}
	I0930 10:21:40.128768   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.869058625s)
	I0930 10:21:40.128827   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.128840   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.128863   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.864150771s)
	I0930 10:21:40.128901   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.128919   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.128936   11632 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.807637652s)
	I0930 10:21:40.128993   11632 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.807757831s)
	I0930 10:21:40.129013   11632 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0930 10:21:40.129043   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.788875083s)
	I0930 10:21:40.129094   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129109   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129175   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.129178   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.769565288s)
	I0930 10:21:40.129201   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129209   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129226   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.129238   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.129246   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129254   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129254   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.690195965s)
	I0930 10:21:40.129275   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129227   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.717238382s)
	I0930 10:21:40.129284   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.129284   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129309   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129317   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129342   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.129350   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.129350   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.628932606s)
	I0930 10:21:40.129357   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129364   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129367   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129376   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129426   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.129439   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.217175961s)
	I0930 10:21:40.129447   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.129454   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129455   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.129461   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129464   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129471   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129570   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.110471459s)
	I0930 10:21:40.129590   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129602   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129692   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.864169906s)
	I0930 10:21:40.129705   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.129713   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.129854   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.607616075s)
	W0930 10:21:40.129882   11632 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:40.129899   11632 retry.go:31] will retry after 268.671254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:40.129949   11632 node_ready.go:35] waiting up to 6m0s for node "addons-967811" to be "Ready" ...
	I0930 10:21:40.129991   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.735049909s)
	I0930 10:21:40.130006   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.130013   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.130078   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.130093   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.130102   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.130115   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.130127   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.130145   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.130152   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.130162   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.130168   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.130115   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.130186   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.130189   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.130197   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.130200   11632 addons.go:475] Verifying addon ingress=true in "addons-967811"
	I0930 10:21:40.130207   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.130215   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.130222   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.130230   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.130233   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.130236   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.133143   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.133174   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.133180   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.133399   11632 out.go:177] * Verifying ingress addon...
	I0930 10:21:40.133711   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.133741   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.133747   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.133993   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.134023   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134030   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134038   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.134044   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.134079   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.134089   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.134105   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134113   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134120   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.134126   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.134106   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134190   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134198   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.134205   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.134683   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134687   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.134697   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134705   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.134711   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.134716   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134723   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134758   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.134778   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.134784   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.134792   11632 addons.go:475] Verifying addon registry=true in "addons-967811"
	I0930 10:21:40.135162   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.135191   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.135198   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.135214   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.135220   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.135265   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.135288   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.135293   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.135402   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.135421   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.135427   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.135434   11632 addons.go:475] Verifying addon metrics-server=true in "addons-967811"
	I0930 10:21:40.135638   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.135657   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.135668   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.135678   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.135682   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.135688   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.136511   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.136536   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.136549   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.136764   11632 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-967811 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:21:40.136977   11632 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:21:40.137153   11632 out.go:177] * Verifying registry addon...
	I0930 10:21:40.139931   11632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:21:40.164800   11632 node_ready.go:49] node "addons-967811" has status "Ready":"True"
	I0930 10:21:40.164828   11632 node_ready.go:38] duration metric: took 34.856617ms for node "addons-967811" to be "Ready" ...
	I0930 10:21:40.164839   11632 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:40.207158   11632 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:21:40.207186   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:40.219406   11632 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:21:40.219430   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.250835   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.250856   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.251152   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.251234   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.251250   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	W0930 10:21:40.251354   11632 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:21:40.263114   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:40.263135   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:40.263429   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:40.263464   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:40.263476   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:40.311098   11632 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:40.398784   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:40.636693   11632 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-967811" context rescaled to 1 replicas
	I0930 10:21:40.657380   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.661662   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:41.207743   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.210593   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:41.335805   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.207460837s)
	I0930 10:21:41.335861   11632 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.892278829s)
	I0930 10:21:41.335867   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:41.336005   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:41.336247   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:41.336325   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:41.336344   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:41.336359   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:41.336368   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:41.336570   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:41.336587   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:41.336598   11632 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-967811"
	I0930 10:21:41.336611   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:41.337469   11632 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:41.338196   11632 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:21:41.339810   11632 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:21:41.340467   11632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:21:41.340966   11632 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:21:41.340986   11632 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:21:41.386596   11632 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:21:41.386615   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:41.555627   11632 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:21:41.555654   11632 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:21:41.631732   11632 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:41.631754   11632 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:21:41.641943   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:41.643829   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.780379   11632 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:41.846455   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.142460   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:42.144506   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:42.316949   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:42.345333   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.641219   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:42.647206   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:42.846747   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.889525   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.490703648s)
	I0930 10:21:42.889574   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:42.889587   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:42.889854   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:42.889902   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:42.889911   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:42.889926   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:42.889934   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:42.890167   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:42.890184   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:42.890196   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:43.145419   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:43.146593   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.355235   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.481504   11632 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.701059685s)
	I0930 10:21:43.481573   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:43.481589   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:43.481963   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:43.481983   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:43.481998   11632 main.go:141] libmachine: Making call to close driver server
	I0930 10:21:43.482014   11632 main.go:141] libmachine: (addons-967811) Calling .Close
	I0930 10:21:43.483510   11632 main.go:141] libmachine: (addons-967811) DBG | Closing plugin on server side
	I0930 10:21:43.483534   11632 main.go:141] libmachine: Successfully made call to close driver server
	I0930 10:21:43.483549   11632 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 10:21:43.484556   11632 addons.go:475] Verifying addon gcp-auth=true in "addons-967811"
	I0930 10:21:43.486217   11632 out.go:177] * Verifying gcp-auth addon...
	I0930 10:21:43.488304   11632 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:21:43.507786   11632 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:21:43.507817   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:43.652005   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.652195   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:43.847570   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.992362   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:44.141542   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.143981   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.317418   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:44.347619   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.492354   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:44.641779   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.643246   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.847349   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.991887   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:45.156849   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.158405   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.347284   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.492271   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:45.641080   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.642894   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.845841   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.994455   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:46.142172   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.144615   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.318761   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:46.346919   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.494624   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:46.641752   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.643667   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.844970   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.992316   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:47.141049   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.142892   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.345198   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.492053   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:47.641452   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.642982   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.845122   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.991850   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:48.141773   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.143411   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.320682   11632 pod_ready.go:98] pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:48 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.187 HostIPs:[{IP:192.168.39.187}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:34 +0000 UTC,FinishedAt:2024-09-30 10:21:45 +0000 UTC,ContainerID:cri-o://32140834ee9abbe748a2ba14d095fc617926cb3e05bac64f4efc92f628cee6e2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://32140834ee9abbe748a2ba14d095fc617926cb3e05bac64f4efc92f628cee6e2 Started:0xc002696100 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021896b0} {Name:kube-api-access-qhlvd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021896c0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0930 10:21:48.320712   11632 pod_ready.go:82] duration metric: took 8.009577627s for pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace to be "Ready" ...
	E0930 10:21:48.320724   11632 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-2rvrx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:48 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.187 HostIPs:[{IP:192.168.39.187}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:34 +0000 UTC,FinishedAt:2024-09-30 10:21:45 +0000 UTC,ContainerID:cri-o://32140834ee9abbe748a2ba14d095fc617926cb3e05bac64f4efc92f628cee6e2,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://32140834ee9abbe748a2ba14d095fc617926cb3e05bac64f4efc92f628cee6e2 Started:0xc002696100 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021896b0} {Name:kube-api-access-qhlvd MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021896c0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0930 10:21:48.320732   11632 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:48.347892   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.492810   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:48.641443   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.643522   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.845903   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.992221   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:49.141231   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.143868   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:49.345098   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.494703   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:49.641264   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.644156   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:49.845499   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.992236   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:50.141801   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.143927   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:50.327844   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:50.346682   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.492538   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:50.641024   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.643069   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:50.844865   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.992784   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:51.141608   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:51.143160   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:51.345548   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.492213   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:51.641924   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:51.644288   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:51.848748   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.991951   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:52.141657   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.144275   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:52.347415   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.493109   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:52.641838   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.643198   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:52.829165   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:52.851022   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.992551   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:53.142410   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.143777   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:53.345431   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.492343   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:53.641553   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.643337   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:53.845221   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.991859   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:54.141849   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.143391   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:54.346249   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.492140   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:54.642102   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.644311   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:54.847635   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.992100   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:55.141088   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.142742   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:55.327479   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:55.345878   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.492549   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:55.642049   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.644627   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:55.845783   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.991814   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:56.141295   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.142886   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:56.345811   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.492640   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:56.640958   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.643192   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:56.848884   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.991999   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:57.141952   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.143119   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:57.346136   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.496180   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:57.642729   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.643571   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:57.826785   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:57.845326   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.994848   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:58.141665   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.143068   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:58.346167   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.492826   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:58.641766   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.643468   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:58.845709   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.992237   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:59.142704   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.143827   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:59.346079   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.492247   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:21:59.641837   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.643706   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:59.828986   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:59.845969   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.992373   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:00.142177   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.143687   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:00.369230   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.493767   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:00.644379   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.650679   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:00.847805   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.992303   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:01.140975   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.142751   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:01.345360   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.505588   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:01.641317   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.642910   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:01.845080   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.991373   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:02.142166   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.144059   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:02.327472   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:02.345448   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.492946   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:02.643331   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.652396   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:02.846286   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.992557   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:03.141575   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.143580   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:03.352331   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.493027   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:03.642782   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.643444   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:03.846034   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.992646   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:04.141240   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.143518   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:04.328820   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:04.344995   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.493553   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:04.641861   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.643826   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:04.846082   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.992561   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:05.141685   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.143906   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:05.346082   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.494978   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:05.642266   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.643228   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:05.848081   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.991860   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:06.141276   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.143842   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:06.347064   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.492967   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:06.641770   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.643699   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:07.177584   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:07.178008   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.179011   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:07.179231   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.180974   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:07.346375   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.493312   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:07.641301   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.643229   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:07.846469   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.992323   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:08.140732   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.143100   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:08.344963   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.493470   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:08.642279   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.647345   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:08.847135   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.993895   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:09.141832   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:09.143778   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:09.326895   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:09.344524   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.491613   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:09.641674   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:09.643901   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:09.845039   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.992696   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:10.141446   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:10.143259   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:10.344881   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.492856   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:10.642432   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:10.644469   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:10.845441   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.991825   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:11.141468   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:11.143540   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:11.345463   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.492943   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:11.641520   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:11.643019   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:11.854593   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.856866   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:11.993400   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:12.142322   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:12.144356   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:12.346072   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.492313   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:12.641678   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:12.643396   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:12.846808   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.991960   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:13.141554   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:13.143305   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:13.346416   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.492599   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:13.641825   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:13.643457   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:13.845102   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.992616   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:14.142094   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:14.143701   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:14.326812   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:14.344766   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.495142   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:14.645087   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:14.646749   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:14.846006   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.992910   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:15.143179   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:15.144516   11632 kapi.go:107] duration metric: took 35.004582422s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:22:15.346290   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:15.492707   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:15.641702   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:15.845665   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:15.992432   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:16.141887   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:16.558065   11632 pod_ready.go:103] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:16.558146   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:16.559857   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:16.655637   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:16.826577   11632 pod_ready.go:93] pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:16.826605   11632 pod_ready.go:82] duration metric: took 28.505865349s for pod "coredns-7c65d6cfc9-p8fdk" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.826615   11632 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.833786   11632 pod_ready.go:93] pod "etcd-addons-967811" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:16.833806   11632 pod_ready.go:82] duration metric: took 7.184691ms for pod "etcd-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.833814   11632 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.839487   11632 pod_ready.go:93] pod "kube-apiserver-addons-967811" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:16.839504   11632 pod_ready.go:82] duration metric: took 5.68438ms for pod "kube-apiserver-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.839512   11632 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.849730   11632 pod_ready.go:93] pod "kube-controller-manager-addons-967811" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:16.849755   11632 pod_ready.go:82] duration metric: took 10.23514ms for pod "kube-controller-manager-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.849771   11632 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xdx5j" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.850352   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:16.854496   11632 pod_ready.go:93] pod "kube-proxy-xdx5j" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:16.854525   11632 pod_ready.go:82] duration metric: took 4.74547ms for pod "kube-proxy-xdx5j" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.854537   11632 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:16.994074   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:17.140995   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:17.224401   11632 pod_ready.go:93] pod "kube-scheduler-addons-967811" in "kube-system" namespace has status "Ready":"True"
	I0930 10:22:17.224426   11632 pod_ready.go:82] duration metric: took 369.881307ms for pod "kube-scheduler-addons-967811" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:17.224435   11632 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace to be "Ready" ...
	I0930 10:22:17.346204   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:17.493016   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:17.641675   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:17.844913   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:17.992461   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:18.141205   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:18.345557   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:18.493099   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:18.642338   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:18.844949   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:18.992674   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:19.142674   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:19.230834   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:19.346168   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:19.492056   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:19.642840   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:20.110230   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:20.111851   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:20.141153   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:20.345513   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:20.491959   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:20.642147   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:20.846310   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:20.992846   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:21.141914   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:21.231771   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:21.350059   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:21.494362   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:21.642894   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:21.844814   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:21.991778   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:22.141948   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:22.349557   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:22.492642   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:22.642422   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:22.845376   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:22.993752   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:23.141817   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:23.347443   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:23.492353   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:23.640833   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:23.732426   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:23.845830   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:23.991991   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:24.142622   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:24.346317   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:24.492487   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:24.642564   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:24.846839   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:24.991905   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:25.144301   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:25.346401   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:25.492824   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:25.641970   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:25.733334   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:25.845823   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:25.992427   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:26.144708   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:26.346031   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:26.493666   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:26.684237   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:26.846321   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:26.992437   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:27.141725   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:27.349319   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:27.492828   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:27.641393   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:27.737210   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:27.845466   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:27.992321   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:28.140834   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:28.345386   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:28.493530   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:28.641950   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:28.846796   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:28.994780   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:29.142264   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:29.345923   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:29.492933   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:29.642268   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:29.745784   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:29.850535   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:29.993179   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:30.143797   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:30.345718   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:30.492897   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:30.642211   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:30.847240   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:30.993015   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:31.142706   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:31.345578   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:31.497605   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:31.648803   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:31.846866   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:31.992429   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:32.141158   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:32.231185   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:32.353558   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:32.492902   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:32.641985   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:32.846047   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:32.993019   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:33.141391   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:33.346277   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:33.492235   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:33.647571   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:33.845095   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:34.386211   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:34.386385   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:34.386483   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:34.390516   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:34.492046   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:34.645479   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:34.845915   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:34.993467   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:35.145529   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:35.345461   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:35.492430   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:35.642579   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:35.848781   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:35.995360   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:36.141572   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:36.345409   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:36.491515   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:36.641196   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:36.732471   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:36.845497   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:36.991889   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:37.141961   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:37.346184   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:37.491655   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:37.641691   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:37.845316   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:37.991857   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:38.144723   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:38.357725   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:38.492972   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:38.641893   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:38.846140   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:38.992322   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:39.140869   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:39.233178   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:39.346407   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:39.493877   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:39.644600   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:39.846359   11632 kapi.go:107] duration metric: took 58.505887407s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:22:39.992209   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:40.141208   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:40.493287   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:40.641008   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:40.992310   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:41.141031   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:41.491745   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:41.641907   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:41.731736   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:41.991673   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:42.141477   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:42.493215   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:42.641081   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:42.992857   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:43.143014   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:43.494056   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:43.641591   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:43.731890   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:43.992305   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:44.141461   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:44.492986   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:44.644449   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:44.992492   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:45.142210   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:45.493575   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:45.643807   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:45.737416   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:45.993576   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:46.141909   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:46.494920   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:46.644473   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:47.002435   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:47.142295   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:47.495831   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:47.708701   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:47.992574   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:48.142531   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:48.230566   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:48.492223   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:48.640467   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:48.991809   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:49.141424   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:49.492769   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:49.641129   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:49.992797   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:50.141756   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:50.258167   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:50.492423   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:50.641742   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:51.093207   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:51.142064   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:51.492365   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:51.641126   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:51.992061   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:52.141557   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:52.492312   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:52.646299   11632 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:52.733243   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:52.994472   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:53.151485   11632 kapi.go:107] duration metric: took 1m13.014507672s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:22:53.493563   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:53.994979   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:54.826714   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:54.828961   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:54.992989   11632 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:55.494029   11632 kapi.go:107] duration metric: took 1m12.0057203s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:22:55.495910   11632 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-967811 cluster.
	I0930 10:22:55.497416   11632 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:22:55.498876   11632 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:22:55.500497   11632 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 10:22:55.501739   11632 addons.go:510] duration metric: took 1m24.820638523s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 10:22:57.347769   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:22:59.734209   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:02.231567   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:04.731346   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:06.732005   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:09.232217   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:11.731266   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:13.733418   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:16.231116   11632 pod_ready.go:103] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"False"
	I0930 10:23:16.735332   11632 pod_ready.go:93] pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace has status "Ready":"True"
	I0930 10:23:16.735366   11632 pod_ready.go:82] duration metric: took 59.510923959s for pod "metrics-server-84c5f94fbc-r6p7j" in "kube-system" namespace to be "Ready" ...
	I0930 10:23:16.735381   11632 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kwm74" in "kube-system" namespace to be "Ready" ...
	I0930 10:23:16.743889   11632 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kwm74" in "kube-system" namespace has status "Ready":"True"
	I0930 10:23:16.743919   11632 pod_ready.go:82] duration metric: took 8.529604ms for pod "nvidia-device-plugin-daemonset-kwm74" in "kube-system" namespace to be "Ready" ...
	I0930 10:23:16.743944   11632 pod_ready.go:39] duration metric: took 1m36.579092737s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:23:16.743965   11632 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:23:16.743999   11632 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 10:23:16.744059   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 10:23:16.812709   11632 cri.go:89] found id: "329f2855c369d1ab3a6f4f0909651f908db3cef3d91e70309661d46153650a72"
	I0930 10:23:16.812733   11632 cri.go:89] found id: ""
	I0930 10:23:16.812742   11632 logs.go:276] 1 containers: [329f2855c369d1ab3a6f4f0909651f908db3cef3d91e70309661d46153650a72]
	I0930 10:23:16.812803   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:16.820521   11632 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 10:23:16.820595   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 10:23:16.862828   11632 cri.go:89] found id: "43d1e8f87adc2e8166f34b850ac4d8aef01390f0f4d728377fc4acb221bb4d8a"
	I0930 10:23:16.862855   11632 cri.go:89] found id: ""
	I0930 10:23:16.862865   11632 logs.go:276] 1 containers: [43d1e8f87adc2e8166f34b850ac4d8aef01390f0f4d728377fc4acb221bb4d8a]
	I0930 10:23:16.862932   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:16.867449   11632 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 10:23:16.867527   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 10:23:16.912021   11632 cri.go:89] found id: "a74a16e467338b2244bb2e40eb7c60cb62762cfdb8e283e14ccadc3cc2833486"
	I0930 10:23:16.912056   11632 cri.go:89] found id: ""
	I0930 10:23:16.912067   11632 logs.go:276] 1 containers: [a74a16e467338b2244bb2e40eb7c60cb62762cfdb8e283e14ccadc3cc2833486]
	I0930 10:23:16.912119   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:16.916458   11632 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 10:23:16.916520   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 10:23:16.964191   11632 cri.go:89] found id: "9f19b165e1ee88685c466320c590be992ef4ee96f4de3b21a9bc9822a53d0535"
	I0930 10:23:16.964217   11632 cri.go:89] found id: ""
	I0930 10:23:16.964226   11632 logs.go:276] 1 containers: [9f19b165e1ee88685c466320c590be992ef4ee96f4de3b21a9bc9822a53d0535]
	I0930 10:23:16.964276   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:16.968601   11632 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 10:23:16.968683   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 10:23:17.012765   11632 cri.go:89] found id: "0fcbc0b59bd98544aec10dbb57ea4f1aa9462224e7e89d089b2f1daa88ca09c6"
	I0930 10:23:17.012798   11632 cri.go:89] found id: ""
	I0930 10:23:17.012807   11632 logs.go:276] 1 containers: [0fcbc0b59bd98544aec10dbb57ea4f1aa9462224e7e89d089b2f1daa88ca09c6]
	I0930 10:23:17.012869   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:17.017101   11632 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 10:23:17.017170   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 10:23:17.070801   11632 cri.go:89] found id: "97032ac69af1ce8c88a837f83e156a405e8a3051e5534676f90e35c468ec4c7c"
	I0930 10:23:17.070828   11632 cri.go:89] found id: ""
	I0930 10:23:17.070836   11632 logs.go:276] 1 containers: [97032ac69af1ce8c88a837f83e156a405e8a3051e5534676f90e35c468ec4c7c]
	I0930 10:23:17.070891   11632 ssh_runner.go:195] Run: which crictl
	I0930 10:23:17.076559   11632 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 10:23:17.076630   11632 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 10:23:17.121465   11632 cri.go:89] found id: ""
	I0930 10:23:17.121499   11632 logs.go:276] 0 containers: []
	W0930 10:23:17.121510   11632 logs.go:278] No container was found matching "kindnet"
	I0930 10:23:17.121523   11632 logs.go:123] Gathering logs for dmesg ...
	I0930 10:23:17.121538   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 10:23:17.138580   11632 logs.go:123] Gathering logs for describe nodes ...
	I0930 10:23:17.138614   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0930 10:23:17.309512   11632 logs.go:123] Gathering logs for kube-apiserver [329f2855c369d1ab3a6f4f0909651f908db3cef3d91e70309661d46153650a72] ...
	I0930 10:23:17.309542   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 329f2855c369d1ab3a6f4f0909651f908db3cef3d91e70309661d46153650a72"
	I0930 10:23:17.370578   11632 logs.go:123] Gathering logs for coredns [a74a16e467338b2244bb2e40eb7c60cb62762cfdb8e283e14ccadc3cc2833486] ...
	I0930 10:23:17.370611   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a74a16e467338b2244bb2e40eb7c60cb62762cfdb8e283e14ccadc3cc2833486"
	I0930 10:23:17.420804   11632 logs.go:123] Gathering logs for kube-scheduler [9f19b165e1ee88685c466320c590be992ef4ee96f4de3b21a9bc9822a53d0535] ...
	I0930 10:23:17.420833   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f19b165e1ee88685c466320c590be992ef4ee96f4de3b21a9bc9822a53d0535"
	I0930 10:23:17.480757   11632 logs.go:123] Gathering logs for kube-controller-manager [97032ac69af1ce8c88a837f83e156a405e8a3051e5534676f90e35c468ec4c7c] ...
	I0930 10:23:17.480787   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97032ac69af1ce8c88a837f83e156a405e8a3051e5534676f90e35c468ec4c7c"
	I0930 10:23:17.559189   11632 logs.go:123] Gathering logs for container status ...
	I0930 10:23:17.559234   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 10:23:17.621668   11632 logs.go:123] Gathering logs for kubelet ...
	I0930 10:23:17.621697   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0930 10:23:17.680680   11632 logs.go:138] Found kubelet problem: Sep 30 10:21:30 addons-967811 kubelet[1213]: W0930 10:21:30.846407    1213 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-967811" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-967811' and this object
	W0930 10:23:17.680859   11632 logs.go:138] Found kubelet problem: Sep 30 10:21:30 addons-967811 kubelet[1213]: E0930 10:21:30.846466    1213 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-967811\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-967811' and this object" logger="UnhandledError"
	W0930 10:23:17.687047   11632 logs.go:138] Found kubelet problem: Sep 30 10:21:37 addons-967811 kubelet[1213]: W0930 10:21:37.918989    1213 reflector.go:561] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-967811" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-967811' and this object
	W0930 10:23:17.687217   11632 logs.go:138] Found kubelet problem: Sep 30 10:21:37 addons-967811 kubelet[1213]: E0930 10:21:37.919030    1213 reflector.go:158] "Unhandled Error" err="object-\"gadget\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-967811\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gadget\": no relationship found between node 'addons-967811' and this object" logger="UnhandledError"
	I0930 10:23:17.714218   11632 logs.go:123] Gathering logs for etcd [43d1e8f87adc2e8166f34b850ac4d8aef01390f0f4d728377fc4acb221bb4d8a] ...
	I0930 10:23:17.714246   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43d1e8f87adc2e8166f34b850ac4d8aef01390f0f4d728377fc4acb221bb4d8a"
	I0930 10:23:17.785766   11632 logs.go:123] Gathering logs for kube-proxy [0fcbc0b59bd98544aec10dbb57ea4f1aa9462224e7e89d089b2f1daa88ca09c6] ...
	I0930 10:23:17.785802   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0fcbc0b59bd98544aec10dbb57ea4f1aa9462224e7e89d089b2f1daa88ca09c6"
	I0930 10:23:17.822986   11632 logs.go:123] Gathering logs for CRI-O ...
	I0930 10:23:17.823010   11632 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:109: out/minikube-linux-amd64 start -p addons-967811 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns failed: signal: killed
--- FAIL: TestAddons/Setup (2400.05s)
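Note: the repeated kapi.go:96 "waiting for pod" and pod_ready.go:103 lines above are minikube polling the cluster until pods matching a label selector report the Ready condition. The Go sketch below illustrates that kind of wait loop using client-go; it is illustrative only — the package and function names, the 500ms interval, and the error wording are assumptions for this note, not minikube's actual kapi.go implementation.

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabelReady polls pods matching selector in ns until every matching
	// pod reports the Ready condition, or the timeout expires. A simplified
	// sketch of the behaviour behind the "waiting for pod ..." log lines above.
	func waitForLabelReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
			}
			time.Sleep(500 * time.Millisecond) // short polling interval, similar to the cadence visible in the log
		}
	}

	// allReady reports whether every pod has PodReady=True in its status conditions.
	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}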

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 node stop m02 -v=7 --alsologtostderr
E0930 11:15:59.041066   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:40.002973   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-033260 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.477290086s)

                                                
                                                
-- stdout --
	* Stopping node "ha-033260-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:15:55.698692   31004 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:15:55.698859   31004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:15:55.698872   31004 out.go:358] Setting ErrFile to fd 2...
	I0930 11:15:55.698879   31004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:15:55.699187   31004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:15:55.699535   31004 mustload.go:65] Loading cluster: ha-033260
	I0930 11:15:55.700097   31004 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:15:55.700116   31004 stop.go:39] StopHost: ha-033260-m02
	I0930 11:15:55.700500   31004 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:15:55.700565   31004 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:15:55.716326   31004 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I0930 11:15:55.716766   31004 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:15:55.717400   31004 main.go:141] libmachine: Using API Version  1
	I0930 11:15:55.717423   31004 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:15:55.717780   31004 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:15:55.720211   31004 out.go:177] * Stopping node "ha-033260-m02"  ...
	I0930 11:15:55.721423   31004 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:15:55.721464   31004 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:15:55.721750   31004 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:15:55.721797   31004 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:15:55.724658   31004 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:15:55.725107   31004 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:15:55.725132   31004 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:15:55.725295   31004 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:15:55.725459   31004 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:15:55.725599   31004 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:15:55.725824   31004 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:15:55.815614   31004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 11:15:55.871550   31004 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 11:15:55.927217   31004 main.go:141] libmachine: Stopping "ha-033260-m02"...
	I0930 11:15:55.927248   31004 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:15:55.928649   31004 main.go:141] libmachine: (ha-033260-m02) Calling .Stop
	I0930 11:15:55.932252   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 0/120
	I0930 11:15:56.933711   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 1/120
	I0930 11:15:57.935017   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 2/120
	I0930 11:15:58.936632   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 3/120
	I0930 11:15:59.937895   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 4/120
	I0930 11:16:00.940124   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 5/120
	I0930 11:16:01.941425   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 6/120
	I0930 11:16:02.942760   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 7/120
	I0930 11:16:03.944332   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 8/120
	I0930 11:16:04.945873   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 9/120
	I0930 11:16:05.948316   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 10/120
	I0930 11:16:06.950392   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 11/120
	I0930 11:16:07.952200   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 12/120
	I0930 11:16:08.953532   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 13/120
	I0930 11:16:09.955026   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 14/120
	I0930 11:16:10.957012   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 15/120
	I0930 11:16:11.958278   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 16/120
	I0930 11:16:12.960214   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 17/120
	I0930 11:16:13.961662   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 18/120
	I0930 11:16:14.963270   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 19/120
	I0930 11:16:15.965153   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 20/120
	I0930 11:16:16.966475   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 21/120
	I0930 11:16:17.967919   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 22/120
	I0930 11:16:18.969652   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 23/120
	I0930 11:16:19.971414   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 24/120
	I0930 11:16:20.973586   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 25/120
	I0930 11:16:21.975323   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 26/120
	I0930 11:16:22.976954   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 27/120
	I0930 11:16:23.978353   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 28/120
	I0930 11:16:24.980248   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 29/120
	I0930 11:16:25.982876   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 30/120
	I0930 11:16:26.984346   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 31/120
	I0930 11:16:27.985716   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 32/120
	I0930 11:16:28.987715   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 33/120
	I0930 11:16:29.989183   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 34/120
	I0930 11:16:30.990755   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 35/120
	I0930 11:16:31.992048   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 36/120
	I0930 11:16:32.993338   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 37/120
	I0930 11:16:33.994746   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 38/120
	I0930 11:16:34.996145   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 39/120
	I0930 11:16:35.997544   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 40/120
	I0930 11:16:36.998860   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 41/120
	I0930 11:16:38.000226   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 42/120
	I0930 11:16:39.001659   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 43/120
	I0930 11:16:40.003187   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 44/120
	I0930 11:16:41.005099   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 45/120
	I0930 11:16:42.006622   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 46/120
	I0930 11:16:43.008107   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 47/120
	I0930 11:16:44.009508   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 48/120
	I0930 11:16:45.011040   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 49/120
	I0930 11:16:46.013129   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 50/120
	I0930 11:16:47.015103   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 51/120
	I0930 11:16:48.016336   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 52/120
	I0930 11:16:49.017790   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 53/120
	I0930 11:16:50.019129   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 54/120
	I0930 11:16:51.021209   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 55/120
	I0930 11:16:52.022791   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 56/120
	I0930 11:16:53.024341   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 57/120
	I0930 11:16:54.025807   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 58/120
	I0930 11:16:55.027920   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 59/120
	I0930 11:16:56.029265   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 60/120
	I0930 11:16:57.030638   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 61/120
	I0930 11:16:58.031969   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 62/120
	I0930 11:16:59.033354   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 63/120
	I0930 11:17:00.035149   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 64/120
	I0930 11:17:01.037067   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 65/120
	I0930 11:17:02.038772   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 66/120
	I0930 11:17:03.040300   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 67/120
	I0930 11:17:04.041816   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 68/120
	I0930 11:17:05.044163   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 69/120
	I0930 11:17:06.046439   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 70/120
	I0930 11:17:07.047819   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 71/120
	I0930 11:17:08.049308   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 72/120
	I0930 11:17:09.051746   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 73/120
	I0930 11:17:10.053630   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 74/120
	I0930 11:17:11.055431   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 75/120
	I0930 11:17:12.057541   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 76/120
	I0930 11:17:13.059884   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 77/120
	I0930 11:17:14.061186   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 78/120
	I0930 11:17:15.062795   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 79/120
	I0930 11:17:16.065071   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 80/120
	I0930 11:17:17.066822   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 81/120
	I0930 11:17:18.068204   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 82/120
	I0930 11:17:19.069733   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 83/120
	I0930 11:17:20.071100   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 84/120
	I0930 11:17:21.073480   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 85/120
	I0930 11:17:22.075658   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 86/120
	I0930 11:17:23.077182   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 87/120
	I0930 11:17:24.078719   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 88/120
	I0930 11:17:25.080155   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 89/120
	I0930 11:17:26.082625   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 90/120
	I0930 11:17:27.084412   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 91/120
	I0930 11:17:28.086586   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 92/120
	I0930 11:17:29.088208   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 93/120
	I0930 11:17:30.089469   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 94/120
	I0930 11:17:31.091411   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 95/120
	I0930 11:17:32.093536   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 96/120
	I0930 11:17:33.094938   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 97/120
	I0930 11:17:34.096250   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 98/120
	I0930 11:17:35.097503   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 99/120
	I0930 11:17:36.099774   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 100/120
	I0930 11:17:37.101249   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 101/120
	I0930 11:17:38.102637   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 102/120
	I0930 11:17:39.104103   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 103/120
	I0930 11:17:40.105583   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 104/120
	I0930 11:17:41.107936   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 105/120
	I0930 11:17:42.109203   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 106/120
	I0930 11:17:43.110716   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 107/120
	I0930 11:17:44.112147   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 108/120
	I0930 11:17:45.113542   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 109/120
	I0930 11:17:46.115808   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 110/120
	I0930 11:17:47.117340   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 111/120
	I0930 11:17:48.118920   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 112/120
	I0930 11:17:49.120179   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 113/120
	I0930 11:17:50.122122   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 114/120
	I0930 11:17:51.124123   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 115/120
	I0930 11:17:52.125726   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 116/120
	I0930 11:17:53.126967   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 117/120
	I0930 11:17:54.128635   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 118/120
	I0930 11:17:55.130387   31004 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 119/120
	I0930 11:17:56.131316   31004 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 11:17:56.131553   31004 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-033260 node stop m02 -v=7 --alsologtostderr": exit status 30
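Note: the 120 "Waiting for machine to stop i/120" lines and the stop.go:66 error above correspond to a bounded poll of the VM state after the stop request; after 120 one-second attempts the node still reports "Running", so the command fails with exit status 30. The Go sketch below illustrates such a loop under assumed types; the Driver interface, state strings, and one-second interval are placeholders for this note and do not reproduce minikube's actual libmachine driver API.

	package vmstop

	import (
		"fmt"
		"time"
	)

	// Driver is a minimal stand-in for the machine driver seen in the log;
	// only the two calls this sketch needs are modeled.
	type Driver interface {
		Stop() error
		GetState() (string, error)
	}

	// stopWithTimeout issues a stop request, then polls the VM state once per
	// second for up to maxWait attempts. If the machine never leaves "Running",
	// it returns an error analogous to the `unable to stop vm, current state
	// "Running"` failure above. Simplified sketch, not minikube's stop.go.
	func stopWithTimeout(d Driver, maxWait int) error {
		if err := d.Stop(); err != nil {
			return fmt.Errorf("requesting stop: %w", err)
		}
		for i := 0; i < maxWait; i++ {
			state, err := d.GetState()
			if err == nil && state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
			time.Sleep(time.Second)
		}
		state, _ := d.GetState()
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}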
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
E0930 11:18:01.925928   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr: (18.750388806s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.418895356s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m03_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:11:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:11:16.968147   26946 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:11:16.968259   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968268   26946 out.go:358] Setting ErrFile to fd 2...
	I0930 11:11:16.968272   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968475   26946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:11:16.969014   26946 out.go:352] Setting JSON to false
	I0930 11:11:16.969874   26946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3224,"bootTime":1727691453,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:11:16.969971   26946 start.go:139] virtualization: kvm guest
	I0930 11:11:16.972340   26946 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:11:16.973700   26946 notify.go:220] Checking for updates...
	I0930 11:11:16.973712   26946 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:11:16.975164   26946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:11:16.976567   26946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:11:16.977791   26946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:16.978971   26946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:11:16.980151   26946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:11:16.981437   26946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:11:17.016837   26946 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 11:11:17.017911   26946 start.go:297] selected driver: kvm2
	I0930 11:11:17.017921   26946 start.go:901] validating driver "kvm2" against <nil>
	I0930 11:11:17.017932   26946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:11:17.018657   26946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.018742   26946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:11:17.034306   26946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:11:17.034349   26946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 11:11:17.034586   26946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:11:17.034614   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:17.034651   26946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 11:11:17.034662   26946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 11:11:17.034717   26946 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:17.034818   26946 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.036732   26946 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:11:17.037780   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:17.037816   26946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:11:17.037823   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:11:17.037892   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:11:17.037903   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:11:17.038215   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:17.038236   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json: {Name:mkb40a3a18f0ab7d52c306f0204aa0e145307acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:17.038367   26946 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:11:17.038394   26946 start.go:364] duration metric: took 15.009µs to acquireMachinesLock for "ha-033260"
	I0930 11:11:17.038414   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:11:17.038466   26946 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 11:11:17.039863   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:11:17.039975   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:11:17.040024   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:11:17.054681   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0930 11:11:17.055106   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:11:17.055654   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:11:17.055673   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:11:17.056010   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:11:17.056264   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:17.056403   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:17.056571   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:11:17.056596   26946 client.go:168] LocalClient.Create starting
	I0930 11:11:17.056623   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:11:17.056664   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056676   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056725   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:11:17.056743   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056752   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056765   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:11:17.056773   26946 main.go:141] libmachine: (ha-033260) Calling .PreCreateCheck
	I0930 11:11:17.057093   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:17.057527   26946 main.go:141] libmachine: Creating machine...
	I0930 11:11:17.057540   26946 main.go:141] libmachine: (ha-033260) Calling .Create
	I0930 11:11:17.057672   26946 main.go:141] libmachine: (ha-033260) Creating KVM machine...
	I0930 11:11:17.058923   26946 main.go:141] libmachine: (ha-033260) DBG | found existing default KVM network
	I0930 11:11:17.059559   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.059428   26970 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0930 11:11:17.059596   26946 main.go:141] libmachine: (ha-033260) DBG | created network xml: 
	I0930 11:11:17.059615   26946 main.go:141] libmachine: (ha-033260) DBG | <network>
	I0930 11:11:17.059621   26946 main.go:141] libmachine: (ha-033260) DBG |   <name>mk-ha-033260</name>
	I0930 11:11:17.059629   26946 main.go:141] libmachine: (ha-033260) DBG |   <dns enable='no'/>
	I0930 11:11:17.059635   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059640   26946 main.go:141] libmachine: (ha-033260) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 11:11:17.059646   26946 main.go:141] libmachine: (ha-033260) DBG |     <dhcp>
	I0930 11:11:17.059651   26946 main.go:141] libmachine: (ha-033260) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 11:11:17.059658   26946 main.go:141] libmachine: (ha-033260) DBG |     </dhcp>
	I0930 11:11:17.059663   26946 main.go:141] libmachine: (ha-033260) DBG |   </ip>
	I0930 11:11:17.059667   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059673   26946 main.go:141] libmachine: (ha-033260) DBG | </network>
	I0930 11:11:17.059679   26946 main.go:141] libmachine: (ha-033260) DBG | 
	I0930 11:11:17.064624   26946 main.go:141] libmachine: (ha-033260) DBG | trying to create private KVM network mk-ha-033260 192.168.39.0/24...
	I0930 11:11:17.128145   26946 main.go:141] libmachine: (ha-033260) DBG | private KVM network mk-ha-033260 192.168.39.0/24 created
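
The debug output above is the libvirt network definition that minikube generated for the private KVM network. As a rough sketch only (the Go type names below are invented for illustration and are not minikube's own), an equivalent document can be assembled with the standard library's encoding/xml before being handed to libvirt:

package main

import (
	"encoding/xml"
	"fmt"
)

// Illustrative structs mirroring the <network> XML printed in the log above.
type dhcpRange struct {
	Start string `xml:"start,attr"`
	End   string `xml:"end,attr"`
}

type ipBlock struct {
	Address string    `xml:"address,attr"`
	Netmask string    `xml:"netmask,attr"`
	Range   dhcpRange `xml:"dhcp>range"`
}

type dns struct {
	Enable string `xml:"enable,attr"`
}

type network struct {
	XMLName xml.Name `xml:"network"`
	Name    string   `xml:"name"`
	DNS     dns      `xml:"dns"`
	IP      ipBlock  `xml:"ip"`
}

func main() {
	n := network{
		Name: "mk-ha-033260",
		DNS:  dns{Enable: "no"},
		IP: ipBlock{
			Address: "192.168.39.1",
			Netmask: "255.255.255.0",
			Range:   dhcpRange{Start: "192.168.39.2", End: "192.168.39.253"},
		},
	}
	out, err := xml.MarshalIndent(n, "", "  ")
	if err != nil {
		panic(err)
	}
	// This document is what gets handed to libvirt to define the private
	// network, the step the log reports as "created" above.
	fmt.Println(string(out))
}
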
	I0930 11:11:17.128172   26946 main.go:141] libmachine: (ha-033260) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.128183   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.128100   26970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.128201   26946 main.go:141] libmachine: (ha-033260) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:11:17.128218   26946 main.go:141] libmachine: (ha-033260) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:11:17.365994   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.365804   26970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa...
	I0930 11:11:17.493008   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492862   26970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk...
	I0930 11:11:17.493034   26946 main.go:141] libmachine: (ha-033260) DBG | Writing magic tar header
	I0930 11:11:17.493046   26946 main.go:141] libmachine: (ha-033260) DBG | Writing SSH key tar header
	I0930 11:11:17.493053   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492975   26970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.493066   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260
	I0930 11:11:17.493124   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 (perms=drwx------)
	I0930 11:11:17.493158   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:11:17.493173   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:11:17.493181   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:11:17.493193   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:11:17.493202   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:11:17.493226   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.493246   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:11:17.493258   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:11:17.493264   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:11:17.493275   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:17.493280   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:11:17.493286   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home
	I0930 11:11:17.493291   26946 main.go:141] libmachine: (ha-033260) DBG | Skipping /home - not owner
	I0930 11:11:17.494319   26946 main.go:141] libmachine: (ha-033260) define libvirt domain using xml: 
	I0930 11:11:17.494340   26946 main.go:141] libmachine: (ha-033260) <domain type='kvm'>
	I0930 11:11:17.494347   26946 main.go:141] libmachine: (ha-033260)   <name>ha-033260</name>
	I0930 11:11:17.494351   26946 main.go:141] libmachine: (ha-033260)   <memory unit='MiB'>2200</memory>
	I0930 11:11:17.494356   26946 main.go:141] libmachine: (ha-033260)   <vcpu>2</vcpu>
	I0930 11:11:17.494359   26946 main.go:141] libmachine: (ha-033260)   <features>
	I0930 11:11:17.494365   26946 main.go:141] libmachine: (ha-033260)     <acpi/>
	I0930 11:11:17.494370   26946 main.go:141] libmachine: (ha-033260)     <apic/>
	I0930 11:11:17.494377   26946 main.go:141] libmachine: (ha-033260)     <pae/>
	I0930 11:11:17.494399   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494410   26946 main.go:141] libmachine: (ha-033260)   </features>
	I0930 11:11:17.494415   26946 main.go:141] libmachine: (ha-033260)   <cpu mode='host-passthrough'>
	I0930 11:11:17.494422   26946 main.go:141] libmachine: (ha-033260)   
	I0930 11:11:17.494425   26946 main.go:141] libmachine: (ha-033260)   </cpu>
	I0930 11:11:17.494429   26946 main.go:141] libmachine: (ha-033260)   <os>
	I0930 11:11:17.494433   26946 main.go:141] libmachine: (ha-033260)     <type>hvm</type>
	I0930 11:11:17.494461   26946 main.go:141] libmachine: (ha-033260)     <boot dev='cdrom'/>
	I0930 11:11:17.494487   26946 main.go:141] libmachine: (ha-033260)     <boot dev='hd'/>
	I0930 11:11:17.494498   26946 main.go:141] libmachine: (ha-033260)     <bootmenu enable='no'/>
	I0930 11:11:17.494504   26946 main.go:141] libmachine: (ha-033260)   </os>
	I0930 11:11:17.494511   26946 main.go:141] libmachine: (ha-033260)   <devices>
	I0930 11:11:17.494518   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='cdrom'>
	I0930 11:11:17.494529   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/boot2docker.iso'/>
	I0930 11:11:17.494540   26946 main.go:141] libmachine: (ha-033260)       <target dev='hdc' bus='scsi'/>
	I0930 11:11:17.494547   26946 main.go:141] libmachine: (ha-033260)       <readonly/>
	I0930 11:11:17.494558   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494568   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='disk'>
	I0930 11:11:17.494579   26946 main.go:141] libmachine: (ha-033260)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:11:17.494592   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk'/>
	I0930 11:11:17.494603   26946 main.go:141] libmachine: (ha-033260)       <target dev='hda' bus='virtio'/>
	I0930 11:11:17.494611   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494625   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494636   26946 main.go:141] libmachine: (ha-033260)       <source network='mk-ha-033260'/>
	I0930 11:11:17.494646   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494655   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494664   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494672   26946 main.go:141] libmachine: (ha-033260)       <source network='default'/>
	I0930 11:11:17.494682   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494731   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494748   26946 main.go:141] libmachine: (ha-033260)     <serial type='pty'>
	I0930 11:11:17.494754   26946 main.go:141] libmachine: (ha-033260)       <target port='0'/>
	I0930 11:11:17.494763   26946 main.go:141] libmachine: (ha-033260)     </serial>
	I0930 11:11:17.494791   26946 main.go:141] libmachine: (ha-033260)     <console type='pty'>
	I0930 11:11:17.494813   26946 main.go:141] libmachine: (ha-033260)       <target type='serial' port='0'/>
	I0930 11:11:17.494833   26946 main.go:141] libmachine: (ha-033260)     </console>
	I0930 11:11:17.494851   26946 main.go:141] libmachine: (ha-033260)     <rng model='virtio'>
	I0930 11:11:17.494868   26946 main.go:141] libmachine: (ha-033260)       <backend model='random'>/dev/random</backend>
	I0930 11:11:17.494879   26946 main.go:141] libmachine: (ha-033260)     </rng>
	I0930 11:11:17.494884   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494894   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494900   26946 main.go:141] libmachine: (ha-033260)   </devices>
	I0930 11:11:17.494910   26946 main.go:141] libmachine: (ha-033260) </domain>
	I0930 11:11:17.494919   26946 main.go:141] libmachine: (ha-033260) 
	I0930 11:11:17.499284   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:1e:fd:d9 in network default
	I0930 11:11:17.499904   26946 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:11:17.499920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:17.500618   26946 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:11:17.501042   26946 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:11:17.501643   26946 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:11:17.502369   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:18.692089   26946 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:11:18.692860   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.693297   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.693313   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.693260   26970 retry.go:31] will retry after 231.51107ms: waiting for machine to come up
	I0930 11:11:18.926878   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.927339   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.927367   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.927281   26970 retry.go:31] will retry after 238.29389ms: waiting for machine to come up
	I0930 11:11:19.167097   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.167813   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.167841   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.167759   26970 retry.go:31] will retry after 304.46036ms: waiting for machine to come up
	I0930 11:11:19.474179   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.474648   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.474678   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.474604   26970 retry.go:31] will retry after 472.499674ms: waiting for machine to come up
	I0930 11:11:19.948108   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.948622   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.948649   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.948597   26970 retry.go:31] will retry after 645.07677ms: waiting for machine to come up
	I0930 11:11:20.595504   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:20.595963   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:20.595984   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:20.595941   26970 retry.go:31] will retry after 894.966176ms: waiting for machine to come up
	I0930 11:11:21.492428   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:21.492831   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:21.492882   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:21.492814   26970 retry.go:31] will retry after 848.859093ms: waiting for machine to come up
	I0930 11:11:22.343403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:22.343835   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:22.343861   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:22.343753   26970 retry.go:31] will retry after 1.05973931s: waiting for machine to come up
	I0930 11:11:23.404961   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:23.405359   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:23.405385   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:23.405316   26970 retry.go:31] will retry after 1.638432323s: waiting for machine to come up
	I0930 11:11:25.046055   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:25.046452   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:25.046477   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:25.046405   26970 retry.go:31] will retry after 2.080958051s: waiting for machine to come up
	I0930 11:11:27.128708   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:27.129133   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:27.129156   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:27.129053   26970 retry.go:31] will retry after 2.256414995s: waiting for machine to come up
	I0930 11:11:29.387356   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:29.387768   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:29.387788   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:29.387745   26970 retry.go:31] will retry after 3.372456281s: waiting for machine to come up
	I0930 11:11:32.761875   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:32.762235   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:32.762254   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:32.762202   26970 retry.go:31] will retry after 3.757571385s: waiting for machine to come up
	I0930 11:11:36.524130   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:36.524597   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:36.524613   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:36.524548   26970 retry.go:31] will retry after 4.081097536s: waiting for machine to come up
	I0930 11:11:40.609929   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610386   26946 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:11:40.610415   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610423   26946 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:11:40.610796   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260
	I0930 11:11:40.682058   26946 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:11:40.682112   26946 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:11:40.682151   26946 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
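
The repeated "will retry after ..." lines above come from minikube polling libvirt for the new domain's DHCP lease with a growing delay. Below is a minimal, self-contained sketch of that pattern; the lookupIP helper and the exact backoff schedule are assumptions for illustration, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking libvirt for the domain's
// DHCP lease; it fails a few times to mimic the retries seen in the log.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.249", nil
}

func main() {
	wait := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add some jitter, roughly matching the
		// increasing "will retry after ..." intervals in the log.
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2
	}
}
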
	I0930 11:11:40.684625   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.684964   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.684990   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.685088   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:11:40.685108   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:11:40.685155   26946 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:11:40.685168   26946 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:11:40.685196   26946 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:11:40.813832   26946 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:11:40.814089   26946 main.go:141] libmachine: (ha-033260) KVM machine creation complete!
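
The "Using SSH client type: external" block above shows minikube probing the VM by running "exit 0" over ssh with a fixed option set. The following is a hedged sketch of the same kind of probe using os/exec, with the key path and address copied from this run; it is not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Argument list modeled on the external-SSH debug output above.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa",
		"-p", "22",
		"docker@192.168.39.249",
		"exit 0",
	}
	// A zero exit status means sshd inside the freshly created VM accepted
	// the login, which is what the "SSH cmd err, output: <nil>" line records.
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
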
	I0930 11:11:40.814483   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:40.815001   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815218   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815362   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:11:40.815373   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:11:40.816691   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:11:40.816703   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:11:40.816707   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:11:40.816712   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.818838   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819210   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.819240   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819306   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.819465   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819601   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819739   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.819883   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.820061   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.820071   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:11:40.929008   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:11:40.929033   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:11:40.929040   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.931913   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932264   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.932308   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932448   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.932679   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932816   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932931   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.933122   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.933283   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.933295   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:11:41.042597   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:11:41.042675   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:11:41.042682   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:11:41.042689   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.042906   26946 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:11:41.042918   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.043088   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.045281   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045591   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.045634   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045749   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.045916   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046048   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046166   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.046324   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.046537   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.046554   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:11:41.173460   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:11:41.173489   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.176142   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176483   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.176513   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176659   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.176845   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.176984   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.177110   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.177285   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.177443   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.177458   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:11:41.295471   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:11:41.295501   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:11:41.295523   26946 buildroot.go:174] setting up certificates
	I0930 11:11:41.295535   26946 provision.go:84] configureAuth start
	I0930 11:11:41.295560   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.295824   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:41.298508   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.298844   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.298871   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.299011   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.301187   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301504   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.301529   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301674   26946 provision.go:143] copyHostCerts
	I0930 11:11:41.301701   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301735   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:11:41.301744   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301807   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:11:41.301895   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301913   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:11:41.301919   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301944   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:11:41.301997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302013   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:11:41.302019   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302039   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:11:41.302094   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
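
The provision.go line above reports generating a server certificate signed by the minikube CA with the listed SANs. For illustration only, here is a standard-library sketch of issuing such a certificate; the throwaway in-memory CA and the 24-hour validity are assumptions, not what minikube actually uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA generated in memory, standing in for the ca.pem/ca-key.pem
	// pair that minikube loads from the .minikube/certs directory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260"}},
		DNSNames:     []string{"ha-033260", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
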
	I0930 11:11:41.595618   26946 provision.go:177] copyRemoteCerts
	I0930 11:11:41.595675   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:11:41.595700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.598644   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599092   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.599122   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599308   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.599628   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.599809   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.599990   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:41.686253   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:11:41.686348   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:11:41.716396   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:11:41.716470   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:11:41.741350   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:11:41.741426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:11:41.765879   26946 provision.go:87] duration metric: took 470.33102ms to configureAuth
	I0930 11:11:41.765904   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:11:41.766073   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:11:41.766153   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.768846   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769139   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.769163   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769350   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.769573   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769751   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.770004   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.770154   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.770171   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:11:41.997580   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:11:41.997603   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:11:41.997612   26946 main.go:141] libmachine: (ha-033260) Calling .GetURL
	I0930 11:11:41.998809   26946 main.go:141] libmachine: (ha-033260) DBG | Using libvirt version 6000000
	I0930 11:11:42.000992   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001367   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.001403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001552   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:11:42.001574   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:11:42.001580   26946 client.go:171] duration metric: took 24.944976164s to LocalClient.Create
	I0930 11:11:42.001599   26946 start.go:167] duration metric: took 24.945029476s to libmachine.API.Create "ha-033260"
	I0930 11:11:42.001605   26946 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:11:42.001634   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:11:42.001658   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.001903   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:11:42.001928   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.004137   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004477   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.004506   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004626   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.004785   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.004929   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.005073   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.088764   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:11:42.093605   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:11:42.093649   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:11:42.093718   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:11:42.093798   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:11:42.093808   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:11:42.093909   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:11:42.104383   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:11:42.133090   26946 start.go:296] duration metric: took 131.471881ms for postStartSetup
	I0930 11:11:42.133135   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:42.133732   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.136141   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136473   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.136492   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136788   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:42.136956   26946 start.go:128] duration metric: took 25.09848122s to createHost
	I0930 11:11:42.136975   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.139440   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139825   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.139853   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139989   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.140175   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140334   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140446   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.140582   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:42.140793   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:42.140810   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:11:42.250567   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694702.228135172
	
	I0930 11:11:42.250590   26946 fix.go:216] guest clock: 1727694702.228135172
	I0930 11:11:42.250600   26946 fix.go:229] Guest: 2024-09-30 11:11:42.228135172 +0000 UTC Remote: 2024-09-30 11:11:42.136966335 +0000 UTC m=+25.202018114 (delta=91.168837ms)
	I0930 11:11:42.250654   26946 fix.go:200] guest clock delta is within tolerance: 91.168837ms
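
The fix.go lines above compare the guest's clock (the "date +%s.%N" output) with the host-side timestamp taken at the moment of the check, and accept the 91.168837ms difference as within tolerance. A small worked check of that arithmetic follows; the one-second tolerance constant is assumed for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers taken from the fix.go lines above: the guest clock reading
	// 1727694702.228135172 versus the "Remote" timestamp of the check.
	guest := time.Unix(1727694702, 228135172).UTC()
	remote := time.Date(2024, 9, 30, 11, 11, 42, 136966335, time.UTC)

	delta := guest.Sub(remote) // 91.168837ms, matching the log
	const tolerance = time.Second
	ok := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, ok)
}
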
	I0930 11:11:42.250662   26946 start.go:83] releasing machines lock for "ha-033260", held for 25.21225918s
	I0930 11:11:42.250689   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.250959   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.253937   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254263   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.254291   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254395   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.254873   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255071   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255171   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:11:42.255230   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.255277   26946 ssh_runner.go:195] Run: cat /version.json
	I0930 11:11:42.255305   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.257775   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258072   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258098   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258117   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258247   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258399   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258499   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258530   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258550   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.258636   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258725   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.258782   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258905   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.259023   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.338949   26946 ssh_runner.go:195] Run: systemctl --version
	I0930 11:11:42.367977   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:11:42.529658   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:11:42.535739   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:11:42.535805   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:11:42.553004   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
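The find/mv step above sidelines any existing bridge or podman CNI configs (here 87-podman-bridge.conflist) by renaming them to *.mk_disabled, so they cannot conflict with the CNI minikube writes later. A rough Go equivalent of that rename pass, a sketch rather than minikube's cni.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirrors: find /etc/cni/net.d -maxdepth 1 -type f
	//   ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled )
	//   -exec mv {} {}.mk_disabled ;
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join("/etc/cni/net.d", name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			panic(err)
		}
		disabled = append(disabled, src)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}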
	I0930 11:11:42.553029   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:11:42.553101   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:11:42.571333   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:11:42.586474   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:11:42.586529   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:11:42.600562   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:11:42.614592   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:11:42.724714   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:11:42.863957   26946 docker.go:233] disabling docker service ...
	I0930 11:11:42.864016   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:11:42.878829   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:11:42.892519   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:11:43.031759   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:11:43.156228   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:11:43.171439   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:11:43.190694   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:11:43.190806   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.201572   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:11:43.201660   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.212771   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.224198   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.235643   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:11:43.247521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.258652   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.276825   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
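The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, force conmon into the "pod" cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A Go sketch of the same substitutions applied to an illustrative config snippet; the starting file contents are assumed, not taken from the VM:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting contents for 02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i '/conmon_cgroup = .*/d' followed by '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
	// The remaining sed calls insert default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
	// in the same line-rewriting style.
}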
	I0930 11:11:43.288336   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:11:43.299367   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:11:43.299422   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:11:43.314057   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
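Above, "sysctl net.bridge.bridge-nf-call-iptables" exits with status 255 because br_netfilter is not loaded yet, so the runner falls back to "modprobe br_netfilter" and then enables IP forwarding before restarting CRI-O. A Go sketch of that probe-then-load fallback, run as root on the guest, standing in for the ssh_runner calls:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; on a fresh VM this fails until br_netfilter is loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Printf("couldn't verify netfilter (%v), loading br_netfilter\n", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("netfilter and ip_forward configured")
}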
	I0930 11:11:43.324403   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:43.446606   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:11:43.543986   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:11:43.544064   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:11:43.548794   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:11:43.548857   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:11:43.552827   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:11:43.593000   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:11:43.593096   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.624593   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.654845   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:11:43.656217   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:43.658636   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.658956   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:43.658982   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.659236   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:11:43.663528   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
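The bash one-liner above is minikube's usual idiom for updating /etc/hosts: drop any line already mapping host.minikube.internal, append the fresh "192.168.39.1<TAB>host.minikube.internal" entry, and copy the temp file back over /etc/hosts. The same pattern appears again further down for control-plane.minikube.internal. A Go sketch of that rewrite, using local file I/O in place of the sudo/SSH plumbing:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<hostname>" and appends a fresh
// "<ip>\t<hostname>" mapping, mirroring the grep -v / echo / cp idiom above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}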
	I0930 11:11:43.677810   26946 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:11:43.677905   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:43.677950   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:43.712140   26946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:11:43.712231   26946 ssh_runner.go:195] Run: which lz4
	I0930 11:11:43.716210   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:11:43.716286   26946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:11:43.720372   26946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:11:43.720397   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:11:45.117936   26946 crio.go:462] duration metric: took 1.401668541s to copy over tarball
	I0930 11:11:45.118009   26946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:11:47.123971   26946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.00593624s)
	I0930 11:11:47.124002   26946 crio.go:469] duration metric: took 2.006037646s to extract the tarball
	I0930 11:11:47.124011   26946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:11:47.161484   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:47.208444   26946 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:11:47.208468   26946 cache_images.go:84] Images are preloaded, skipping loading
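The preload logic above works in two passes: ask crictl which images are already on the node, and only copy and extract the preloaded-images tarball when the expected control-plane image (registry.k8s.io/kube-apiserver:v1.31.1 here) is missing; afterwards the same check reports that all images are preloaded. A sketch of that check in Go; the JSON shape of "crictl images --output json" is assumed to be {"images":[{"repoTags":[...]}]}:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList matches the (assumed) shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloadNeeded(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wanted) {
				return false, nil // image already present, skip loading
			}
		}
	}
	return true, nil // missing: copy the preload tarball over and extract it under /var
}

func main() {
	need, err := preloadNeeded("kube-apiserver:v1.31.1")
	if err != nil {
		panic(err)
	}
	fmt.Println("need preload tarball:", need)
}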
	I0930 11:11:47.208475   26946 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:11:47.208561   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:11:47.208632   26946 ssh_runner.go:195] Run: crio config
	I0930 11:11:47.256652   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:47.256671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:11:47.256679   26946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:11:47.256700   26946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:11:47.256808   26946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:11:47.256829   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:11:47.256866   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:11:47.273274   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:11:47.273411   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:11:47.273489   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:11:47.284468   26946 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:11:47.284546   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:11:47.295086   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:11:47.313062   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:11:47.330490   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:11:47.348148   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 11:11:47.364645   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:11:47.368788   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:11:47.381517   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:47.516902   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:11:47.535500   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:11:47.535531   26946 certs.go:194] generating shared ca certs ...
	I0930 11:11:47.535554   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.535745   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:11:47.535819   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:11:47.535836   26946 certs.go:256] generating profile certs ...
	I0930 11:11:47.535916   26946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:11:47.535947   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt with IP's: []
	I0930 11:11:47.718587   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt ...
	I0930 11:11:47.718617   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt: {Name:mkef0c2b538ff6ec90e4096f6b30d2cc62a0498b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718785   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key ...
	I0930 11:11:47.718795   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key: {Name:mk0bf4d552829907727733b9f23a1e78046426c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718864   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf
	I0930 11:11:47.718878   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.254]
	I0930 11:11:47.993565   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf ...
	I0930 11:11:47.993602   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf: {Name:mk8d827ffc338aba548bc3df464e9e04ae838b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993789   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf ...
	I0930 11:11:47.993807   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf: {Name:mka275015927a8ca9f533558d637ec2560f5b41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993887   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:11:47.993965   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:11:47.994041   26946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:11:47.994059   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt with IP's: []
	I0930 11:11:48.098988   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt ...
	I0930 11:11:48.099020   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt: {Name:mk7106fd4af523e8a328dae6580fd1ecc34c18b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099178   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key ...
	I0930 11:11:48.099189   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key: {Name:mka3dbe7128ec5d469ec7906155af8e6e7cc2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099265   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:11:48.099283   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:11:48.099294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:11:48.099304   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:11:48.099314   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:11:48.099324   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:11:48.099333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:11:48.099342   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:11:48.099385   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:11:48.099425   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:11:48.099434   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:11:48.099457   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:11:48.099481   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:11:48.099502   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:11:48.099537   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:11:48.099561   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.099574   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.099592   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.100091   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:11:48.126879   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:11:48.153722   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:11:48.179797   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:11:48.205074   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:11:48.230272   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:11:48.255030   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:11:48.279850   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:11:48.306723   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:11:48.332995   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:11:48.363646   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:11:48.392223   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:11:48.410336   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:11:48.416506   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:11:48.428642   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433601   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433673   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.439817   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:11:48.451918   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:11:48.464282   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469211   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.475319   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:11:48.487558   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:11:48.500151   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505278   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505355   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.511924   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
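Each CA certificate copied into /usr/share/ca-certificates gets a hash-named symlink in /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0 above) so OpenSSL can find it by subject hash; the hash value comes from "openssl x509 -hash -noout -in <cert>", exactly as in the runs above. A small Go sketch of that step, shelling out to openssl for the hash; the link targets here are simplified to the /usr/share paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
// mirroring the openssl + ln -fs pair in the log above.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	for _, cert := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/11009.pem",
		"/usr/share/ca-certificates/110092.pem",
	} {
		if err := linkCertByHash(cert); err != nil {
			panic(err)
		}
		fmt.Println("linked", cert)
	}
}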
	I0930 11:11:48.525201   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:11:48.529960   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:11:48.530014   26946 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:48.530081   26946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:11:48.530129   26946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:11:48.568913   26946 cri.go:89] found id: ""
	I0930 11:11:48.568975   26946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:11:48.580292   26946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 11:11:48.593494   26946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 11:11:48.606006   26946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 11:11:48.606037   26946 kubeadm.go:157] found existing configuration files:
	
	I0930 11:11:48.606079   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 11:11:48.615784   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 11:11:48.615855   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 11:11:48.626018   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 11:11:48.635953   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 11:11:48.636032   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 11:11:48.646292   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.657605   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 11:11:48.657679   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.669154   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 11:11:48.680279   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 11:11:48.680348   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
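Because this is a first start, none of the /etc/kubernetes/*.conf files exist yet, so each grep for https://control-plane.minikube.internal:8443 fails and the file is removed defensively before kubeadm init regenerates it. A Go sketch of that cleanup pass, with local file I/O standing in for the ssh_runner grep/rm calls:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the HA endpoint, keep it
		}
		// Missing or stale: remove it so kubeadm init writes a fresh one.
		if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
			panic(err)
		}
		fmt.Printf("%q may not contain %s - removed\n", conf, endpoint)
	}
}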
	I0930 11:11:48.691798   26946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 11:11:48.797903   26946 kubeadm.go:310] W0930 11:11:48.782166     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.798931   26946 kubeadm.go:310] W0930 11:11:48.783291     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.907657   26946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 11:12:00.116285   26946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 11:12:00.116363   26946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 11:12:00.116459   26946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 11:12:00.116597   26946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 11:12:00.116728   26946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 11:12:00.116817   26946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 11:12:00.118253   26946 out.go:235]   - Generating certificates and keys ...
	I0930 11:12:00.118344   26946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 11:12:00.118441   26946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 11:12:00.118536   26946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 11:12:00.118621   26946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 11:12:00.118710   26946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 11:12:00.118780   26946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 11:12:00.118849   26946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 11:12:00.118971   26946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119022   26946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 11:12:00.119113   26946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119209   26946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 11:12:00.119261   26946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 11:12:00.119300   26946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 11:12:00.119361   26946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 11:12:00.119418   26946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 11:12:00.119463   26946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 11:12:00.119517   26946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 11:12:00.119604   26946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 11:12:00.119657   26946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 11:12:00.119721   26946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 11:12:00.119813   26946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 11:12:00.121972   26946 out.go:235]   - Booting up control plane ...
	I0930 11:12:00.122077   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 11:12:00.122168   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 11:12:00.122257   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 11:12:00.122354   26946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 11:12:00.122445   26946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 11:12:00.122493   26946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 11:12:00.122632   26946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 11:12:00.122746   26946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 11:12:00.122807   26946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002277963s
	I0930 11:12:00.122866   26946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 11:12:00.122914   26946 kubeadm.go:310] [api-check] The API server is healthy after 5.817139259s
	I0930 11:12:00.123017   26946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 11:12:00.123126   26946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 11:12:00.123189   26946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 11:12:00.123373   26946 kubeadm.go:310] [mark-control-plane] Marking the node ha-033260 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 11:12:00.123455   26946 kubeadm.go:310] [bootstrap-token] Using token: mglnbr.4ysxjyfx6ulvufry
	I0930 11:12:00.124695   26946 out.go:235]   - Configuring RBAC rules ...
	I0930 11:12:00.124816   26946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 11:12:00.124888   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 11:12:00.125008   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 11:12:00.125123   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 11:12:00.125226   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 11:12:00.125300   26946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 11:12:00.125399   26946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 11:12:00.125438   26946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 11:12:00.125482   26946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 11:12:00.125488   26946 kubeadm.go:310] 
	I0930 11:12:00.125543   26946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 11:12:00.125548   26946 kubeadm.go:310] 
	I0930 11:12:00.125627   26946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 11:12:00.125640   26946 kubeadm.go:310] 
	I0930 11:12:00.125667   26946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 11:12:00.125722   26946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 11:12:00.125765   26946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 11:12:00.125771   26946 kubeadm.go:310] 
	I0930 11:12:00.125822   26946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 11:12:00.125832   26946 kubeadm.go:310] 
	I0930 11:12:00.125875   26946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 11:12:00.125882   26946 kubeadm.go:310] 
	I0930 11:12:00.125945   26946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 11:12:00.126010   26946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 11:12:00.126068   26946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 11:12:00.126073   26946 kubeadm.go:310] 
	I0930 11:12:00.126141   26946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 11:12:00.126212   26946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 11:12:00.126219   26946 kubeadm.go:310] 
	I0930 11:12:00.126299   26946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126384   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 \
	I0930 11:12:00.126404   26946 kubeadm.go:310] 	--control-plane 
	I0930 11:12:00.126410   26946 kubeadm.go:310] 
	I0930 11:12:00.126493   26946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 11:12:00.126501   26946 kubeadm.go:310] 
	I0930 11:12:00.126563   26946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126653   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 
	I0930 11:12:00.126666   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:12:00.126671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:12:00.128070   26946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 11:12:00.129234   26946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 11:12:00.134944   26946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 11:12:00.134960   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 11:12:00.155333   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 11:12:00.530346   26946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 11:12:00.530478   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260 minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=true
	I0930 11:12:00.530486   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:00.762071   26946 ops.go:34] apiserver oom_adj: -16
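The ops.go line above reports the API server's OOM adjustment by reading /proc/<pid>/oom_adj for the kube-apiserver process; -16 is a strongly negative value, meaning the kernel should avoid OOM-killing the API server under memory pressure. A sketch of that check:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.Fields(string(out))[0] // first matching pid
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
}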
	I0930 11:12:00.762161   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.262836   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.762341   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.262939   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.762594   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.263292   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.762877   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.861166   26946 kubeadm.go:1113] duration metric: took 3.330735229s to wait for elevateKubeSystemPrivileges
	I0930 11:12:03.861207   26946 kubeadm.go:394] duration metric: took 15.331194175s to StartCluster
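The repeated "kubectl get sa default" calls above, one roughly every 500 ms between 11:12:00.762 and 11:12:03.762, are a readiness poll: the controller-manager creates the default ServiceAccount shortly after startup, and minikube waits for it (about 3.3 s here) before treating the cluster as ready for RBAC and addon work. A sketch of such a poll loop in Go, shelling out to the same kubectl binary and kubeconfig; the two-minute timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` until it succeeds,
// mirroring the ~500ms retry loop in the log above.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}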
	I0930 11:12:03.861229   26946 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.861306   26946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.861899   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.862096   26946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:03.862109   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 11:12:03.862128   26946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:12:03.862180   26946 addons.go:69] Setting storage-provisioner=true in profile "ha-033260"
	I0930 11:12:03.862192   26946 addons.go:234] Setting addon storage-provisioner=true in "ha-033260"
	I0930 11:12:03.862117   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:12:03.862217   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.862220   26946 addons.go:69] Setting default-storageclass=true in profile "ha-033260"
	I0930 11:12:03.862242   26946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-033260"
	I0930 11:12:03.862318   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:03.862546   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862579   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.862640   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862674   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.878311   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0930 11:12:03.878524   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0930 11:12:03.878793   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.878956   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.879296   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879311   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879437   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879458   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879666   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.879878   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.880063   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.880274   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.880317   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.882311   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.882615   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:12:03.883117   26946 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:12:03.883340   26946 addons.go:234] Setting addon default-storageclass=true in "ha-033260"
	I0930 11:12:03.883377   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.883734   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.883774   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.895612   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0930 11:12:03.896182   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.896686   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.896706   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.897041   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.897263   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.899125   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.899133   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I0930 11:12:03.899601   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.900021   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.900036   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.900378   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.901008   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.901054   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.901205   26946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:12:03.902407   26946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:03.902428   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:12:03.902445   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.905497   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906023   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.906045   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906199   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.906396   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.906554   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.906702   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.917103   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:12:03.917557   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.918124   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.918149   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.918507   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.918675   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.920302   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.920506   26946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:03.920522   26946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:12:03.920544   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.923151   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923529   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.923552   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.923867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.923995   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.924108   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.981471   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 11:12:04.090970   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:04.120632   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:04.535542   26946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0930 11:12:04.535597   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535614   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.535906   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.535923   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.535937   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535938   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.535945   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.536174   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.536192   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.536203   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.536265   26946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:12:04.536288   26946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:12:04.536378   26946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 11:12:04.536387   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.536394   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.536397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.616635   26946 round_trippers.go:574] Response Status: 200 OK in 80 milliseconds
	I0930 11:12:04.617143   26946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 11:12:04.617157   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.617164   26946 round_trippers.go:473]     Content-Type: application/json
	I0930 11:12:04.617168   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.617171   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.644304   26946 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0930 11:12:04.644577   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.644596   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.644880   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.644899   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.839773   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.839805   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840111   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840131   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.840140   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.840149   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840370   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840384   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.841979   26946 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 11:12:04.843256   26946 addons.go:510] duration metric: took 981.127437ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 11:12:04.843295   26946 start.go:246] waiting for cluster config update ...
	I0930 11:12:04.843309   26946 start.go:255] writing updated cluster config ...
	I0930 11:12:04.844944   26946 out.go:201] 
	I0930 11:12:04.846458   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:04.846524   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.848060   26946 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:12:04.849158   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:12:04.849179   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:12:04.849280   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:12:04.849291   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:12:04.849355   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.849507   26946 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:12:04.849551   26946 start.go:364] duration metric: took 26.46µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:12:04.849567   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:04.849642   26946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 11:12:04.851226   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:12:04.851326   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:04.851360   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:04.866966   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0930 11:12:04.867433   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:04.867975   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:04.867995   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:04.868336   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:04.868557   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:04.868710   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:04.868858   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:12:04.868889   26946 client.go:168] LocalClient.Create starting
	I0930 11:12:04.868923   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:12:04.868957   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.868973   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869023   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:12:04.869042   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.869052   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869078   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:12:04.869093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .PreCreateCheck
	I0930 11:12:04.869253   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:04.869711   26946 main.go:141] libmachine: Creating machine...
	I0930 11:12:04.869724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .Create
	I0930 11:12:04.869845   26946 main.go:141] libmachine: (ha-033260-m02) Creating KVM machine...
	I0930 11:12:04.871091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing default KVM network
	I0930 11:12:04.871157   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing private KVM network mk-ha-033260
	I0930 11:12:04.871294   26946 main.go:141] libmachine: (ha-033260-m02) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:04.871318   26946 main.go:141] libmachine: (ha-033260-m02) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:12:04.871364   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:04.871284   27323 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:04.871439   26946 main.go:141] libmachine: (ha-033260-m02) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:12:05.099309   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.099139   27323 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa...
	I0930 11:12:05.396113   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.395976   27323 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk...
	I0930 11:12:05.396137   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing magic tar header
	I0930 11:12:05.396150   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing SSH key tar header
	I0930 11:12:05.396161   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.396084   27323 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:05.396175   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02
	I0930 11:12:05.396200   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 (perms=drwx------)
	I0930 11:12:05.396209   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:12:05.396245   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:12:05.396258   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:12:05.396269   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:12:05.396285   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:12:05.396302   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:12:05.396315   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:12:05.396331   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:05.396348   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:12:05.396365   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:12:05.396376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:12:05.396390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home
	I0930 11:12:05.396400   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Skipping /home - not owner
	I0930 11:12:05.397208   26946 main.go:141] libmachine: (ha-033260-m02) define libvirt domain using xml: 
	I0930 11:12:05.397237   26946 main.go:141] libmachine: (ha-033260-m02) <domain type='kvm'>
	I0930 11:12:05.397248   26946 main.go:141] libmachine: (ha-033260-m02)   <name>ha-033260-m02</name>
	I0930 11:12:05.397259   26946 main.go:141] libmachine: (ha-033260-m02)   <memory unit='MiB'>2200</memory>
	I0930 11:12:05.397267   26946 main.go:141] libmachine: (ha-033260-m02)   <vcpu>2</vcpu>
	I0930 11:12:05.397273   26946 main.go:141] libmachine: (ha-033260-m02)   <features>
	I0930 11:12:05.397282   26946 main.go:141] libmachine: (ha-033260-m02)     <acpi/>
	I0930 11:12:05.397289   26946 main.go:141] libmachine: (ha-033260-m02)     <apic/>
	I0930 11:12:05.397297   26946 main.go:141] libmachine: (ha-033260-m02)     <pae/>
	I0930 11:12:05.397306   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397314   26946 main.go:141] libmachine: (ha-033260-m02)   </features>
	I0930 11:12:05.397321   26946 main.go:141] libmachine: (ha-033260-m02)   <cpu mode='host-passthrough'>
	I0930 11:12:05.397329   26946 main.go:141] libmachine: (ha-033260-m02)   
	I0930 11:12:05.397335   26946 main.go:141] libmachine: (ha-033260-m02)   </cpu>
	I0930 11:12:05.397359   26946 main.go:141] libmachine: (ha-033260-m02)   <os>
	I0930 11:12:05.397379   26946 main.go:141] libmachine: (ha-033260-m02)     <type>hvm</type>
	I0930 11:12:05.397384   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='cdrom'/>
	I0930 11:12:05.397391   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='hd'/>
	I0930 11:12:05.397407   26946 main.go:141] libmachine: (ha-033260-m02)     <bootmenu enable='no'/>
	I0930 11:12:05.397419   26946 main.go:141] libmachine: (ha-033260-m02)   </os>
	I0930 11:12:05.397427   26946 main.go:141] libmachine: (ha-033260-m02)   <devices>
	I0930 11:12:05.397438   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='cdrom'>
	I0930 11:12:05.397450   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/boot2docker.iso'/>
	I0930 11:12:05.397461   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hdc' bus='scsi'/>
	I0930 11:12:05.397468   26946 main.go:141] libmachine: (ha-033260-m02)       <readonly/>
	I0930 11:12:05.397480   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397492   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='disk'>
	I0930 11:12:05.397501   26946 main.go:141] libmachine: (ha-033260-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:12:05.397518   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk'/>
	I0930 11:12:05.397528   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hda' bus='virtio'/>
	I0930 11:12:05.397538   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397548   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397565   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='mk-ha-033260'/>
	I0930 11:12:05.397579   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397590   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397605   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397627   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='default'/>
	I0930 11:12:05.397641   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397651   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397663   26946 main.go:141] libmachine: (ha-033260-m02)     <serial type='pty'>
	I0930 11:12:05.397672   26946 main.go:141] libmachine: (ha-033260-m02)       <target port='0'/>
	I0930 11:12:05.397683   26946 main.go:141] libmachine: (ha-033260-m02)     </serial>
	I0930 11:12:05.397693   26946 main.go:141] libmachine: (ha-033260-m02)     <console type='pty'>
	I0930 11:12:05.397702   26946 main.go:141] libmachine: (ha-033260-m02)       <target type='serial' port='0'/>
	I0930 11:12:05.397716   26946 main.go:141] libmachine: (ha-033260-m02)     </console>
	I0930 11:12:05.397728   26946 main.go:141] libmachine: (ha-033260-m02)     <rng model='virtio'>
	I0930 11:12:05.397739   26946 main.go:141] libmachine: (ha-033260-m02)       <backend model='random'>/dev/random</backend>
	I0930 11:12:05.397750   26946 main.go:141] libmachine: (ha-033260-m02)     </rng>
	I0930 11:12:05.397758   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397766   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397771   26946 main.go:141] libmachine: (ha-033260-m02)   </devices>
	I0930 11:12:05.397781   26946 main.go:141] libmachine: (ha-033260-m02) </domain>
	I0930 11:12:05.397794   26946 main.go:141] libmachine: (ha-033260-m02) 
	I0930 11:12:05.404924   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:91:42:82 in network default
	I0930 11:12:05.405500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:05.405515   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:12:05.406422   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:12:05.406717   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:12:05.407099   26946 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:12:05.407766   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:12:06.665629   26946 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:12:06.666463   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.666923   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.666983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.666914   27323 retry.go:31] will retry after 236.292128ms: waiting for machine to come up
	I0930 11:12:06.904458   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.904973   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.905008   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.904946   27323 retry.go:31] will retry after 373.72215ms: waiting for machine to come up
	I0930 11:12:07.280653   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.281148   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.281167   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.281127   27323 retry.go:31] will retry after 417.615707ms: waiting for machine to come up
	I0930 11:12:07.700723   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.701173   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.701199   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.701130   27323 retry.go:31] will retry after 495.480397ms: waiting for machine to come up
	I0930 11:12:08.198698   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.199207   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.199236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.199183   27323 retry.go:31] will retry after 541.395524ms: waiting for machine to come up
	I0930 11:12:08.742190   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.742786   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.742812   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.742737   27323 retry.go:31] will retry after 711.22134ms: waiting for machine to come up
	I0930 11:12:09.455685   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:09.456147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:09.456172   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:09.456119   27323 retry.go:31] will retry after 1.042420332s: waiting for machine to come up
	I0930 11:12:10.499804   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:10.500316   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:10.500353   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:10.500299   27323 retry.go:31] will retry after 1.048379902s: waiting for machine to come up
	I0930 11:12:11.550177   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:11.550587   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:11.550616   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:11.550525   27323 retry.go:31] will retry after 1.84570983s: waiting for machine to come up
	I0930 11:12:13.397532   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:13.398027   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:13.398052   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:13.397980   27323 retry.go:31] will retry after 1.566549945s: waiting for machine to come up
	I0930 11:12:14.966467   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:14.966938   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:14.966983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:14.966914   27323 retry.go:31] will retry after 1.814424901s: waiting for machine to come up
	I0930 11:12:16.783827   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:16.784216   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:16.784247   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:16.784177   27323 retry.go:31] will retry after 3.594354669s: waiting for machine to come up
	I0930 11:12:20.380537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:20.380935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:20.380960   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:20.380904   27323 retry.go:31] will retry after 3.199139157s: waiting for machine to come up
	I0930 11:12:23.582795   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:23.583206   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:23.583227   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:23.583181   27323 retry.go:31] will retry after 5.054668279s: waiting for machine to come up
	I0930 11:12:28.639867   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.640504   26946 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:12:28.640526   26946 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:12:28.640539   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.641001   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260
	I0930 11:12:28.722236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:12:28.722267   26946 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:12:28.722280   26946 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:12:28.724853   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725241   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.725265   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725515   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:12:28.725540   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:12:28.725576   26946 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:12:28.725598   26946 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:12:28.725610   26946 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:12:28.854399   26946 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:12:28.854625   26946 main.go:141] libmachine: (ha-033260-m02) KVM machine creation complete!
	I0930 11:12:28.855272   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:28.855866   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856047   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856170   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:12:28.856182   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:12:28.857578   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:12:28.857593   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:12:28.857600   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:12:28.857606   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.859889   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860246   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.860279   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860438   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.860622   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860773   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860913   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.861114   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.861325   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.861337   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:12:28.973157   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:28.973184   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:12:28.973195   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.976106   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.976531   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976798   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.977021   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977185   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977339   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.977493   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.977714   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.977727   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:12:29.086855   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:12:29.086927   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:12:29.086937   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:12:29.086951   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087245   26946 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:12:29.087269   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087463   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.090156   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090525   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.090551   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090676   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.090846   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.090986   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.091115   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.091289   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.091467   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.091479   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:12:29.220174   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:12:29.220204   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.223091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.223567   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.223905   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224048   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224217   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.224385   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.224590   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.224614   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:12:29.343733   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:29.343767   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:12:29.343787   26946 buildroot.go:174] setting up certificates
	I0930 11:12:29.343798   26946 provision.go:84] configureAuth start
	I0930 11:12:29.343811   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.344093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:29.346631   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.346930   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.346956   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.347096   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.349248   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349664   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.349689   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349858   26946 provision.go:143] copyHostCerts
	I0930 11:12:29.349889   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.349936   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:12:29.349948   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.350055   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:12:29.350156   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350176   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:12:29.350181   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350207   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:12:29.350254   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350271   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:12:29.350277   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350298   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:12:29.350347   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
	I0930 11:12:29.533329   26946 provision.go:177] copyRemoteCerts
	I0930 11:12:29.533387   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:12:29.533409   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.535946   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536287   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.536327   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536541   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.536745   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.536906   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.537054   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:29.625264   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:12:29.625353   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:12:29.651589   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:12:29.651644   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:12:29.677526   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:12:29.677634   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:12:29.708210   26946 provision.go:87] duration metric: took 364.395657ms to configureAuth
	I0930 11:12:29.708246   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:12:29.708446   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:29.708540   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.711111   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711545   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.711578   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711743   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.711914   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712073   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712191   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.712381   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.712587   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.712611   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:12:29.956548   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:12:29.956576   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:12:29.956585   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetURL
	I0930 11:12:29.957861   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using libvirt version 6000000
	I0930 11:12:29.959943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960349   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.960376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960589   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:12:29.960605   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:12:29.960611   26946 client.go:171] duration metric: took 25.091713434s to LocalClient.Create
	I0930 11:12:29.960635   26946 start.go:167] duration metric: took 25.091779085s to libmachine.API.Create "ha-033260"
	I0930 11:12:29.960649   26946 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:12:29.960663   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:12:29.960682   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:29.960894   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:12:29.960921   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.962943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963366   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.963390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963547   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.963747   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.963887   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.963995   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.049684   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:12:30.054345   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:12:30.054373   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:12:30.054430   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:12:30.054507   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:12:30.054516   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:12:30.054592   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:12:30.064685   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:30.090069   26946 start.go:296] duration metric: took 129.405576ms for postStartSetup
	I0930 11:12:30.090127   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:30.090769   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.093475   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.093805   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.093836   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.094011   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:30.094269   26946 start.go:128] duration metric: took 25.244614564s to createHost
	I0930 11:12:30.094293   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.096188   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096490   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.096524   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096656   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.096825   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.096963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.097093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.097253   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:30.097426   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:30.097439   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:12:30.206856   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694750.184612585
	
	I0930 11:12:30.206885   26946 fix.go:216] guest clock: 1727694750.184612585
	I0930 11:12:30.206895   26946 fix.go:229] Guest: 2024-09-30 11:12:30.184612585 +0000 UTC Remote: 2024-09-30 11:12:30.094281951 +0000 UTC m=+73.159334041 (delta=90.330634ms)
	I0930 11:12:30.206915   26946 fix.go:200] guest clock delta is within tolerance: 90.330634ms
	I0930 11:12:30.206922   26946 start.go:83] releasing machines lock for "ha-033260-m02", held for 25.357361614s
	I0930 11:12:30.206944   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.207256   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.209590   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.209935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.209964   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.212335   26946 out.go:177] * Found network options:
	I0930 11:12:30.213673   26946 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:12:30.215021   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.215056   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215673   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215843   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215938   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:12:30.215976   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:12:30.215983   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.216054   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:12:30.216075   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.218771   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.218983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219125   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219360   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219434   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219459   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219516   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219662   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219670   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.219831   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219846   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.219963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.220088   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.454192   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:12:30.462288   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:12:30.462348   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:12:30.479853   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:12:30.479878   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:12:30.479941   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:12:30.496617   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:12:30.512078   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:12:30.512142   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:12:30.526557   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:12:30.541136   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:12:30.655590   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:12:30.814049   26946 docker.go:233] disabling docker service ...
	I0930 11:12:30.814123   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:12:30.829972   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:12:30.844068   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:12:30.969831   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:12:31.096443   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:12:31.111612   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:12:31.131553   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:12:31.131621   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.143596   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:12:31.143658   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.156112   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.167422   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.179559   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:12:31.192037   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.203507   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.222188   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.234115   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:12:31.245344   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:12:31.245401   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:12:31.259589   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:12:31.269907   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:31.388443   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:12:31.482864   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:12:31.482933   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:12:31.487957   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:12:31.488026   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:12:31.492173   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:12:31.530740   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:12:31.530821   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.560435   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.592377   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:12:31.593888   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:12:31.595254   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:31.598165   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598504   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:31.598535   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598710   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:12:31.603081   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:31.616231   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:12:31.616424   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:31.616676   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.616714   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.631793   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0930 11:12:31.632254   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.632734   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.632757   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.633092   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.633272   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:31.634860   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:31.635130   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.635170   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.649687   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0930 11:12:31.650053   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.650497   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.650520   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.650803   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.650951   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:31.651118   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:12:31.651130   26946 certs.go:194] generating shared ca certs ...
	I0930 11:12:31.651148   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.651260   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:12:31.651304   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:12:31.651313   26946 certs.go:256] generating profile certs ...
	I0930 11:12:31.651410   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:12:31.651435   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87
	I0930 11:12:31.651449   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.254]
	I0930 11:12:31.912914   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 ...
	I0930 11:12:31.912947   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87: {Name:mk5789d867ee86689334498533835b6baa525e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913110   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 ...
	I0930 11:12:31.913123   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87: {Name:mkcd56431095ebd059864bd581ed7c141670cf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913195   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:12:31.913335   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:12:31.913463   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:12:31.913478   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:12:31.913490   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:12:31.913500   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:12:31.913510   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:12:31.913520   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:12:31.913529   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:12:31.913539   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:12:31.913551   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:12:31.913591   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:12:31.913648   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:12:31.913661   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:12:31.913690   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:12:31.913712   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:12:31.913735   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:12:31.913780   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:31.913806   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:31.913824   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:12:31.913836   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:12:31.913865   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:31.917099   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917453   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:31.917482   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917675   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:31.917892   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:31.918041   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:31.918169   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:31.994019   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:12:31.999621   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:12:32.012410   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:12:32.017661   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:12:32.028991   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:12:32.034566   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:12:32.047607   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:12:32.052664   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:12:32.069473   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:12:32.074705   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:12:32.086100   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:12:32.090557   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:12:32.103048   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:12:32.132371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:12:32.159806   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:12:32.185933   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:12:32.210826   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 11:12:32.236862   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:12:32.262441   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:12:32.289773   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:12:32.318287   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:12:32.347371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:12:32.372327   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:12:32.397781   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:12:32.415260   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:12:32.433137   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:12:32.450661   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:12:32.467444   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:12:32.484994   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:12:32.503412   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:12:32.522919   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:12:32.529057   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:12:32.541643   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546691   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546753   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.553211   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:12:32.565054   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:12:32.576855   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581764   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581818   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.588983   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:12:32.602082   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:12:32.613340   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617722   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617775   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.623445   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:12:32.635275   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:12:32.639755   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:12:32.639812   26946 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:12:32.639905   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:12:32.639928   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:12:32.639958   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:12:32.657152   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:12:32.657231   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:12:32.657301   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.669072   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:12:32.669126   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.681078   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:12:32.681102   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681147   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 11:12:32.681159   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681202   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 11:12:32.685896   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:12:32.685930   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:12:33.355089   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.355169   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.360551   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:12:33.360593   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:12:33.497331   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:12:33.536292   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.536381   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.556993   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:12:33.557034   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0930 11:12:33.963212   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:12:33.973956   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:12:33.992407   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:12:34.010174   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:12:34.027647   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:12:34.031715   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:34.045021   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:34.164493   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:34.181854   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:34.182385   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:34.182436   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:34.197448   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0930 11:12:34.197925   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:34.198415   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:34.198439   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:34.198777   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:34.199019   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:34.199179   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:34.199281   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:12:34.199296   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:34.202318   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202754   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:34.202783   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202947   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:34.203150   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:34.203332   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:34.203477   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:34.356774   26946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:34.356813   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443"
	I0930 11:12:56.361665   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443": (22.004830324s)
	I0930 11:12:56.361703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:12:57.091049   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m02 minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:12:57.252660   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:12:57.383009   26946 start.go:319] duration metric: took 23.183825523s to joinCluster
	I0930 11:12:57.383083   26946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:57.383372   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:57.384696   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:12:57.385781   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:57.652948   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:57.700673   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:57.700909   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:12:57.700967   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:12:57.701166   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:12:57.701263   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:57.701272   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:57.701283   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:57.701288   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:57.710787   26946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:12:58.201703   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.201723   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.201733   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.201738   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.218761   26946 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:12:58.701415   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.701436   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.701444   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.701447   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.707425   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:12:59.202375   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.202398   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.202410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.202416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.206657   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.701590   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.701611   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.701635   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.701642   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.706264   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.707024   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:00.201877   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.201901   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.201917   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.201924   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.205419   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:00.701357   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.701378   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.701386   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.701391   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.706252   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:01.202282   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.202307   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.202319   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.202325   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.206013   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:01.701738   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.701760   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.701768   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.701773   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.705302   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.202004   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.202030   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.202043   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.202051   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.205535   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.206136   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:02.701406   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.701427   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.701436   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.701440   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.704929   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.202160   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.202189   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.202198   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.202204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.205838   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.701797   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.701821   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.701832   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.701841   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.706107   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:04.201592   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.201623   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.201634   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.201641   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.204858   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:04.701789   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.701812   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.701825   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.701831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.710541   26946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:13:04.711317   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:05.202211   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.202237   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.202248   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.202255   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.206000   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:05.702240   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.702263   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.702272   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.702276   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.713473   26946 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0930 11:13:06.201370   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.201398   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.201412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.201421   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.205062   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:06.702136   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.702157   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.702170   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.702178   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.707226   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:07.201911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.201933   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.201941   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.201947   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.205398   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:07.206056   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:07.702203   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.702228   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.702236   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.702240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.705652   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.201364   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.201385   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.201393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.201397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.204682   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.701564   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.701585   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.701593   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.701597   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.704941   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.201826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.201874   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.201887   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.201892   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.205730   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.206265   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:09.701548   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.701576   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.701584   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.701588   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.704970   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.202351   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.202382   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.202393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.202402   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.205886   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.701694   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.701717   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.701725   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.701729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.705252   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.202235   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.202256   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.202264   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.205904   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.206456   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:11.701817   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.701840   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.701848   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.701852   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.705418   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:12.202233   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.202257   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.202273   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.206552   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:12.701910   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.701932   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.701940   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.701944   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.705423   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.201690   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.201715   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.201727   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.201733   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.205360   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.701378   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.701402   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.701410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.701416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.704921   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.705712   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:14.202280   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.202303   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.202313   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.202317   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.206153   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.701500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.701536   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.701545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.701549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.705110   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.705891   26946 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:13:14.705919   26946 node_ready.go:38] duration metric: took 17.004728232s for node "ha-033260-m02" to be "Ready" ...
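
The ~500 ms GET loop above is the node_ready wait: minikube repeatedly fetches the node object and checks its Ready condition until it flips to True. A minimal client-go sketch of that style of check, assuming a kubeconfig-backed clientset (the function name, kubeconfig path and wiring here are illustrative, not minikube's own helpers):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a node until its Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log above
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-033260-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
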
	I0930 11:13:14.705930   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:14.706003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:14.706012   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.706019   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.706027   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.710637   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.717034   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.717112   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:13:14.717120   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.717127   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.717132   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.720167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.720847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.720863   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.720870   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.720874   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.723869   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.724515   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.724535   26946 pod_ready.go:82] duration metric: took 7.4758ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724545   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724613   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:13:14.724621   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.724628   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.724634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.727903   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.728724   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.728741   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.728751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.728757   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.731653   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.732553   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.732574   26946 pod_ready.go:82] duration metric: took 8.020759ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732586   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:13:14.732664   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.732674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.732682   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.735972   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.736968   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.736990   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.737001   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.737006   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.742593   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:14.743126   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.743157   26946 pod_ready.go:82] duration metric: took 10.560613ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743170   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743261   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:13:14.743274   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.743284   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.743295   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.746988   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.747647   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.747666   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.747678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.747685   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.752616   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.753409   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.753424   26946 pod_ready.go:82] duration metric: took 10.242469ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.753437   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.901974   26946 request.go:632] Waited for 148.458979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902036   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902043   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.902055   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.902060   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.905987   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.101905   26946 request.go:632] Waited for 195.35281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.101994   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.102002   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.102014   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.102020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.106060   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:15.106613   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.106631   26946 pod_ready.go:82] duration metric: took 353.188275ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
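
The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from the Kubernetes client's own client-side rate limiter, not from the API server. A short sketch of where that limit is configured, assuming a kubeconfig-backed rest.Config (the QPS/Burst values shown are illustrative, not minikube's settings):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}
	// client-go throttles requests on the client side; raising QPS/Burst
	// shortens the "Waited for ... due to client-side throttling" delays.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
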
	I0930 11:13:15.106640   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.301775   26946 request.go:632] Waited for 195.071866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301852   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301859   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.301869   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.301877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.305432   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.502470   26946 request.go:632] Waited for 196.425957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502545   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502550   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.502559   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.502564   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.506368   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.506795   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.506815   26946 pod_ready.go:82] duration metric: took 400.168693ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.506824   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.702050   26946 request.go:632] Waited for 195.162388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702133   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702141   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.702152   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.702163   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.705891   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.901957   26946 request.go:632] Waited for 195.415244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902032   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.902045   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.902050   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.905760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.906550   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.906568   26946 pod_ready.go:82] duration metric: took 399.738814ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.906577   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.101960   26946 request.go:632] Waited for 195.295618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102020   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.102027   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.102034   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.105657   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.301949   26946 request.go:632] Waited for 195.400353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302010   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302015   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.302022   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.302028   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.306149   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:16.306664   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.306684   26946 pod_ready.go:82] duration metric: took 400.100909ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.306693   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.501852   26946 request.go:632] Waited for 195.093896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501929   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501936   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.501944   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.501948   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.505624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.702111   26946 request.go:632] Waited for 195.755005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702172   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702201   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.702232   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.702242   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.706191   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.706772   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.706793   26946 pod_ready.go:82] duration metric: took 400.093034ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.706806   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.901822   26946 request.go:632] Waited for 194.939903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901874   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901878   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.901886   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.901890   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.905939   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.102468   26946 request.go:632] Waited for 195.869654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102551   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102559   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.102570   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.102576   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.105889   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.106573   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.106594   26946 pod_ready.go:82] duration metric: took 399.778126ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.106605   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.301593   26946 request.go:632] Waited for 194.913576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301658   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.301671   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.301678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.305178   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.502249   26946 request.go:632] Waited for 196.387698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502326   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502350   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.502358   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.502362   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.505833   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.506907   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.506935   26946 pod_ready.go:82] duration metric: took 400.319251ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.506948   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.701919   26946 request.go:632] Waited for 194.9063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.701999   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.702006   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.702017   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.702028   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.705520   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.902402   26946 request.go:632] Waited for 196.207639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902477   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902485   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.902500   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.902526   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.906656   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.907109   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.907128   26946 pod_ready.go:82] duration metric: took 400.172408ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.907142   26946 pod_ready.go:39] duration metric: took 3.201195785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:17.907159   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:13:17.907218   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:13:17.923202   26946 api_server.go:72] duration metric: took 20.540084285s to wait for apiserver process to appear ...
	I0930 11:13:17.923232   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:13:17.923251   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:13:17.929517   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:13:17.929596   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:13:17.929602   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.929631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.929636   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.930581   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:13:17.930807   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:13:17.930834   26946 api_server.go:131] duration metric: took 7.593991ms to wait for apiserver health ...
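
The healthz and /version probes above are plain HTTPS GETs against the apiserver endpoint, expecting a 200 with body "ok". A stripped-down sketch of that probe; it skips certificate verification purely for brevity, whereas the real check trusts the cluster's certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure TLS is for illustration only; the real check uses the cluster CA and client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.249:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expects "200 ok"
}
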
	I0930 11:13:17.930843   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:13:18.102359   26946 request.go:632] Waited for 171.419304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102425   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102433   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.102442   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.102449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.107679   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:18.114591   26946 system_pods.go:59] 17 kube-system pods found
	I0930 11:13:18.114717   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.114749   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.114780   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.114803   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.114826   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.114841   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.114876   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.114899   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.114915   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.114935   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.114950   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.114975   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.114997   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.115011   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.115025   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.115059   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.115132   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.115146   26946 system_pods.go:74] duration metric: took 184.295086ms to wait for pod list to return data ...
	I0930 11:13:18.115155   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:13:18.301606   26946 request.go:632] Waited for 186.324564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301691   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301697   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.301704   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.301708   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.305792   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.306031   26946 default_sa.go:45] found service account: "default"
	I0930 11:13:18.306053   26946 default_sa.go:55] duration metric: took 190.887438ms for default service account to be created ...
	I0930 11:13:18.306064   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:13:18.502520   26946 request.go:632] Waited for 196.381212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502574   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502580   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.502589   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.502594   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.507606   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.513786   26946 system_pods.go:86] 17 kube-system pods found
	I0930 11:13:18.513814   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.513820   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.513824   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.513828   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.513832   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.513835   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.513838   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.513842   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.513845   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.513849   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.513852   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.513855   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.513858   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.513864   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.513868   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.513871   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.513874   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.513883   26946 system_pods.go:126] duration metric: took 207.809961ms to wait for k8s-apps to be running ...
	I0930 11:13:18.513889   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:13:18.513933   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:18.530491   26946 system_svc.go:56] duration metric: took 16.594303ms WaitForService to wait for kubelet
	I0930 11:13:18.530520   26946 kubeadm.go:582] duration metric: took 21.147406438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
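
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` on the guest over SSH and treats a zero exit code as "running". A local analogue of that probe (exec'ing systemctl directly rather than through minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
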
	I0930 11:13:18.530536   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:13:18.701935   26946 request.go:632] Waited for 171.311845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.701998   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.702004   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.702013   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.702020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.706454   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.707258   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707286   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707302   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707309   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707315   26946 node_conditions.go:105] duration metric: took 176.773141ms to run NodePressure ...
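
The NodePressure step lists the cluster's nodes and reads the ephemeral-storage and cpu capacity printed per node above. A short client-go sketch of the same readout (kubeconfig wiring is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
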
	I0930 11:13:18.707329   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:13:18.707365   26946 start.go:255] writing updated cluster config ...
	I0930 11:13:18.709744   26946 out.go:201] 
	I0930 11:13:18.711365   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:18.711455   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.713157   26946 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:13:18.714611   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:13:18.714636   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:13:18.714744   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:13:18.714757   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:13:18.714852   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.715040   26946 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:13:18.715084   26946 start.go:364] duration metric: took 25.338µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:13:18.715101   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:18.715188   26946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 11:13:18.716794   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:13:18.716894   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:18.716928   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:18.732600   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I0930 11:13:18.733109   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:18.733561   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:18.733575   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:18.733910   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:18.734089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:18.734238   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:18.734421   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:13:18.734451   26946 client.go:168] LocalClient.Create starting
	I0930 11:13:18.734489   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:13:18.734529   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734544   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734600   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:13:18.734619   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734631   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734648   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:13:18.734656   26946 main.go:141] libmachine: (ha-033260-m03) Calling .PreCreateCheck
	I0930 11:13:18.734797   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:18.735196   26946 main.go:141] libmachine: Creating machine...
	I0930 11:13:18.735209   26946 main.go:141] libmachine: (ha-033260-m03) Calling .Create
	I0930 11:13:18.735336   26946 main.go:141] libmachine: (ha-033260-m03) Creating KVM machine...
	I0930 11:13:18.736643   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing default KVM network
	I0930 11:13:18.736820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing private KVM network mk-ha-033260
	I0930 11:13:18.736982   26946 main.go:141] libmachine: (ha-033260-m03) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:18.737011   26946 main.go:141] libmachine: (ha-033260-m03) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:13:18.737118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.736992   27716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:18.737204   26946 main.go:141] libmachine: (ha-033260-m03) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:13:18.965830   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.965684   27716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa...
	I0930 11:13:19.182387   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182221   27716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk...
	I0930 11:13:19.182427   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing magic tar header
	I0930 11:13:19.182442   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing SSH key tar header
	I0930 11:13:19.182454   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182378   27716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:19.182548   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03
	I0930 11:13:19.182570   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:13:19.182578   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 (perms=drwx------)
	I0930 11:13:19.182587   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:13:19.182596   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:13:19.182610   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:13:19.182620   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:13:19.182634   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:19.182647   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:13:19.182661   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:13:19.182678   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:13:19.182687   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:19.182699   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:13:19.182796   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home
	I0930 11:13:19.182820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Skipping /home - not owner
	I0930 11:13:19.183716   26946 main.go:141] libmachine: (ha-033260-m03) define libvirt domain using xml: 
	I0930 11:13:19.183740   26946 main.go:141] libmachine: (ha-033260-m03) <domain type='kvm'>
	I0930 11:13:19.183766   26946 main.go:141] libmachine: (ha-033260-m03)   <name>ha-033260-m03</name>
	I0930 11:13:19.183787   26946 main.go:141] libmachine: (ha-033260-m03)   <memory unit='MiB'>2200</memory>
	I0930 11:13:19.183800   26946 main.go:141] libmachine: (ha-033260-m03)   <vcpu>2</vcpu>
	I0930 11:13:19.183806   26946 main.go:141] libmachine: (ha-033260-m03)   <features>
	I0930 11:13:19.183817   26946 main.go:141] libmachine: (ha-033260-m03)     <acpi/>
	I0930 11:13:19.183827   26946 main.go:141] libmachine: (ha-033260-m03)     <apic/>
	I0930 11:13:19.183836   26946 main.go:141] libmachine: (ha-033260-m03)     <pae/>
	I0930 11:13:19.183845   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.183853   26946 main.go:141] libmachine: (ha-033260-m03)   </features>
	I0930 11:13:19.183861   26946 main.go:141] libmachine: (ha-033260-m03)   <cpu mode='host-passthrough'>
	I0930 11:13:19.183868   26946 main.go:141] libmachine: (ha-033260-m03)   
	I0930 11:13:19.183881   26946 main.go:141] libmachine: (ha-033260-m03)   </cpu>
	I0930 11:13:19.183892   26946 main.go:141] libmachine: (ha-033260-m03)   <os>
	I0930 11:13:19.183902   26946 main.go:141] libmachine: (ha-033260-m03)     <type>hvm</type>
	I0930 11:13:19.183911   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='cdrom'/>
	I0930 11:13:19.183924   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='hd'/>
	I0930 11:13:19.183936   26946 main.go:141] libmachine: (ha-033260-m03)     <bootmenu enable='no'/>
	I0930 11:13:19.183942   26946 main.go:141] libmachine: (ha-033260-m03)   </os>
	I0930 11:13:19.183951   26946 main.go:141] libmachine: (ha-033260-m03)   <devices>
	I0930 11:13:19.183961   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='cdrom'>
	I0930 11:13:19.183975   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/boot2docker.iso'/>
	I0930 11:13:19.183985   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hdc' bus='scsi'/>
	I0930 11:13:19.183993   26946 main.go:141] libmachine: (ha-033260-m03)       <readonly/>
	I0930 11:13:19.184007   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184019   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='disk'>
	I0930 11:13:19.184028   26946 main.go:141] libmachine: (ha-033260-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:13:19.184041   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk'/>
	I0930 11:13:19.184052   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hda' bus='virtio'/>
	I0930 11:13:19.184065   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184076   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184137   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='mk-ha-033260'/>
	I0930 11:13:19.184167   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184179   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184187   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184197   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='default'/>
	I0930 11:13:19.184205   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184215   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184223   26946 main.go:141] libmachine: (ha-033260-m03)     <serial type='pty'>
	I0930 11:13:19.184242   26946 main.go:141] libmachine: (ha-033260-m03)       <target port='0'/>
	I0930 11:13:19.184249   26946 main.go:141] libmachine: (ha-033260-m03)     </serial>
	I0930 11:13:19.184259   26946 main.go:141] libmachine: (ha-033260-m03)     <console type='pty'>
	I0930 11:13:19.184267   26946 main.go:141] libmachine: (ha-033260-m03)       <target type='serial' port='0'/>
	I0930 11:13:19.184277   26946 main.go:141] libmachine: (ha-033260-m03)     </console>
	I0930 11:13:19.184285   26946 main.go:141] libmachine: (ha-033260-m03)     <rng model='virtio'>
	I0930 11:13:19.184297   26946 main.go:141] libmachine: (ha-033260-m03)       <backend model='random'>/dev/random</backend>
	I0930 11:13:19.184305   26946 main.go:141] libmachine: (ha-033260-m03)     </rng>
	I0930 11:13:19.184313   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184326   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184337   26946 main.go:141] libmachine: (ha-033260-m03)   </devices>
	I0930 11:13:19.184344   26946 main.go:141] libmachine: (ha-033260-m03) </domain>
	I0930 11:13:19.184355   26946 main.go:141] libmachine: (ha-033260-m03) 
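	[editor's note] The XML above is the domain definition the kvm2 driver hands to libvirt before booting the node. For readers reproducing this by hand, a minimal sketch in Go of registering and starting a domain from such an XML file by shelling out to virsh (an assumption about available tooling; the driver itself talks to libvirt directly) could look like this:

	// Hypothetical helper: define and start a libvirt domain from an XML file
	// by invoking virsh. Only illustrative; not how the kvm2 driver is implemented.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func defineAndStart(xmlPath, name string) error {
		// "virsh define" registers the domain from its XML description.
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("define %s: %v: %s", name, err, out)
		}
		// "virsh start" boots the freshly defined domain.
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("start %s: %v: %s", name, err, out)
		}
		return nil
	}

	func main() {
		if err := defineAndStart("ha-033260-m03.xml", "ha-033260-m03"); err != nil {
			fmt.Println(err)
		}
	}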
	I0930 11:13:19.191067   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:09:7f:ae in network default
	I0930 11:13:19.191719   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:13:19.191738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:19.192592   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:13:19.192924   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:13:19.193268   26946 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:13:19.193941   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:20.468738   26946 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:13:20.469515   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.469944   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.469970   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.469926   27716 retry.go:31] will retry after 232.398954ms: waiting for machine to come up
	I0930 11:13:20.704544   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.704996   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.705026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.704955   27716 retry.go:31] will retry after 380.728938ms: waiting for machine to come up
	I0930 11:13:21.087407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.087831   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.087853   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.087810   27716 retry.go:31] will retry after 405.871711ms: waiting for machine to come up
	I0930 11:13:21.495366   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.495857   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.495885   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.495810   27716 retry.go:31] will retry after 380.57456ms: waiting for machine to come up
	I0930 11:13:21.878262   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.878697   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.878718   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.878678   27716 retry.go:31] will retry after 486.639816ms: waiting for machine to come up
	I0930 11:13:22.367485   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:22.367998   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:22.368026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:22.367946   27716 retry.go:31] will retry after 818.869274ms: waiting for machine to come up
	I0930 11:13:23.187832   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:23.188286   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:23.188306   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:23.188246   27716 retry.go:31] will retry after 870.541242ms: waiting for machine to come up
	I0930 11:13:24.060866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:24.061364   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:24.061403   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:24.061339   27716 retry.go:31] will retry after 1.026163442s: waiting for machine to come up
	I0930 11:13:25.089407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:25.089859   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:25.089889   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:25.089789   27716 retry.go:31] will retry after 1.677341097s: waiting for machine to come up
	I0930 11:13:26.769716   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:26.770127   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:26.770173   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:26.770102   27716 retry.go:31] will retry after 2.102002194s: waiting for machine to come up
	I0930 11:13:28.873495   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:28.874089   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:28.874118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:28.874042   27716 retry.go:31] will retry after 2.512249945s: waiting for machine to come up
	I0930 11:13:31.388375   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:31.388813   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:31.388842   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:31.388766   27716 retry.go:31] will retry after 3.025058152s: waiting for machine to come up
	I0930 11:13:34.415391   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:34.415806   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:34.415826   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:34.415764   27716 retry.go:31] will retry after 3.6491044s: waiting for machine to come up
	I0930 11:13:38.067512   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:38.067932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:38.067957   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:38.067891   27716 retry.go:31] will retry after 5.462753525s: waiting for machine to come up
	I0930 11:13:43.535257   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.535767   26946 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:13:43.535792   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
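	[editor's note] The "will retry after ..." lines above show the driver polling the libvirt DHCP leases with steadily growing delays until the guest's MAC acquires an address. A minimal sketch of that retry-with-growing-backoff pattern (the lookup callback is a stand-in, not the driver's real lease query):

	// Sketch of the retry loop seen in the log: poll until an IP appears or we time out.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			// Grow the delay, loosely mirroring the increasing intervals in the log.
			delay = delay * 3 / 2
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.238", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}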
	I0930 11:13:43.535800   26946 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:13:43.536253   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260
	I0930 11:13:43.612168   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:43.612200   26946 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:13:43.612213   26946 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:13:43.614758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.615073   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260
	I0930 11:13:43.615102   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find defined IP address of network mk-ha-033260 interface with MAC address 52:54:00:f2:70:c8
	I0930 11:13:43.615180   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:43.615208   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:43.615240   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:43.615252   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:43.615269   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:43.619189   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: exit status 255: 
	I0930 11:13:43.619212   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 11:13:43.619222   26946 main.go:141] libmachine: (ha-033260-m03) DBG | command : exit 0
	I0930 11:13:43.619233   26946 main.go:141] libmachine: (ha-033260-m03) DBG | err     : exit status 255
	I0930 11:13:43.619246   26946 main.go:141] libmachine: (ha-033260-m03) DBG | output  : 
	I0930 11:13:46.621877   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:46.624327   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.624849   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.624873   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.625052   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:46.625075   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:46.625113   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:46.625125   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:46.625137   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:46.749932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
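	[editor's note] WaitForSSH above keeps running "exit 0" over ssh until it succeeds (the first attempt fails with status 255 because the guest has no IP yet). A simpler reachability-only sketch, which just waits for TCP port 22 to accept connections rather than executing a command, might be:

	// Sketch: wait until TCP port 22 on the guest accepts connections.
	// The driver goes further and runs `exit 0` over ssh; this only checks reachability.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForSSH(addr string, attempts int, wait time.Duration) error {
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(wait)
		}
		return fmt.Errorf("ssh not reachable on %s", addr)
	}

	func main() {
		if err := waitForSSH("192.168.39.238:22", 10, 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}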
	I0930 11:13:46.750211   26946 main.go:141] libmachine: (ha-033260-m03) KVM machine creation complete!
	I0930 11:13:46.750551   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:46.751116   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751371   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751553   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:13:46.751568   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:13:46.752698   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:13:46.752714   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:13:46.752721   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:13:46.752728   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.755296   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.755738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755877   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.756027   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756136   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756284   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.756448   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.756639   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.756651   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:13:46.857068   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:46.857090   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:13:46.857097   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.859904   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860340   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.860372   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860564   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.860899   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861065   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861200   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.861350   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.861511   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.861526   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:13:46.970453   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:13:46.970520   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:13:46.970534   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:13:46.970543   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970766   26946 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:13:46.970791   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970955   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.973539   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.973929   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.973956   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.974221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.974372   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974556   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974665   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.974786   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.974938   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.974953   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:13:47.087604   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:13:47.087636   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.090559   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.090866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.090895   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.091089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.091283   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091400   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091516   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.091649   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.091811   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.091834   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:13:47.203919   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:47.203950   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:13:47.203969   26946 buildroot.go:174] setting up certificates
	I0930 11:13:47.203977   26946 provision.go:84] configureAuth start
	I0930 11:13:47.203986   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:47.204270   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.207236   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207589   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.207618   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207750   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.210196   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210560   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.210587   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210754   26946 provision.go:143] copyHostCerts
	I0930 11:13:47.210783   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210816   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:13:47.210826   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210895   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:13:47.210966   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.210983   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:13:47.210989   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.211013   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:13:47.211059   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211076   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:13:47.211082   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211104   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:13:47.211150   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
	I0930 11:13:47.437398   26946 provision.go:177] copyRemoteCerts
	I0930 11:13:47.437447   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:13:47.437470   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.440541   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.440922   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.440953   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.441156   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.441379   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.441583   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.441760   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.524024   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:13:47.524094   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:13:47.548921   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:13:47.548991   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:13:47.573300   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:13:47.573362   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:13:47.597885   26946 provision.go:87] duration metric: took 393.894244ms to configureAuth
	I0930 11:13:47.597913   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:13:47.598137   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:47.598221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.600783   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601100   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.601141   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601308   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.601511   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601694   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601837   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.601988   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.602139   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.602153   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:13:47.824726   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:13:47.824757   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:13:47.824767   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetURL
	I0930 11:13:47.826205   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using libvirt version 6000000
	I0930 11:13:47.829313   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829732   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.829758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829979   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:13:47.829995   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:13:47.830002   26946 client.go:171] duration metric: took 29.095541403s to LocalClient.Create
	I0930 11:13:47.830029   26946 start.go:167] duration metric: took 29.095609634s to libmachine.API.Create "ha-033260"
	I0930 11:13:47.830042   26946 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:13:47.830059   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:13:47.830080   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:47.830308   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:13:47.830331   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.832443   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.832840   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.832866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.833032   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.833204   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.833336   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.833448   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.911982   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:13:47.916413   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:13:47.916434   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:13:47.916512   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:13:47.916604   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:13:47.916615   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:13:47.916726   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:13:47.926360   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:47.951398   26946 start.go:296] duration metric: took 121.337458ms for postStartSetup
	I0930 11:13:47.951443   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:47.951959   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.954522   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.954882   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.954902   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.955203   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:47.955450   26946 start.go:128] duration metric: took 29.240250665s to createHost
	I0930 11:13:47.955475   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.957714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958054   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.958091   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958262   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.958436   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958562   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958708   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.958822   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.958982   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.958994   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:13:48.062976   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694828.042605099
	
	I0930 11:13:48.062999   26946 fix.go:216] guest clock: 1727694828.042605099
	I0930 11:13:48.063009   26946 fix.go:229] Guest: 2024-09-30 11:13:48.042605099 +0000 UTC Remote: 2024-09-30 11:13:47.955462433 +0000 UTC m=+151.020514213 (delta=87.142666ms)
	I0930 11:13:48.063030   26946 fix.go:200] guest clock delta is within tolerance: 87.142666ms
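	[editor's note] The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the skew (here 87ms) is within tolerance. A small sketch of that comparison, with the tolerance value chosen only for illustration:

	// Sketch of the guest-clock check: parse the guest epoch and compare to the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestEpoch, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest), nil
	}

	func main() {
		d, _ := clockDelta("1727694828.042605099", time.Now())
		if d < 0 {
			d = -d
		}
		const tolerance = time.Second // illustrative threshold, not minikube's actual value
		fmt.Printf("delta=%v withinTolerance=%v\n", d, d <= tolerance)
	}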
	I0930 11:13:48.063037   26946 start.go:83] releasing machines lock for "ha-033260-m03", held for 29.347943498s
	I0930 11:13:48.063057   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.063295   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:48.065833   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.066130   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.066166   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.068440   26946 out.go:177] * Found network options:
	I0930 11:13:48.070194   26946 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:13:48.071578   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.071602   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.071621   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072253   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072426   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072506   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:13:48.072552   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:13:48.072605   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.072630   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.072698   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:13:48.072719   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:48.075267   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075365   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075641   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075667   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075715   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075746   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075778   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.075958   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.075973   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.076123   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076126   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.076233   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.076311   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076464   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.315424   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:13:48.322103   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:13:48.322167   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:13:48.340329   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:13:48.340354   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:13:48.340419   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:13:48.356866   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:13:48.372077   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:13:48.372139   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:13:48.387616   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:13:48.402259   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:13:48.523588   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:13:48.671634   26946 docker.go:233] disabling docker service ...
	I0930 11:13:48.671693   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:13:48.687483   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:13:48.702106   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:13:48.848121   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:13:48.976600   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:13:48.991745   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:13:49.014226   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:13:49.014303   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.025816   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:13:49.025892   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.038153   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.049762   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.061409   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:13:49.073521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.084788   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.104074   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.116909   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:13:49.129116   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:13:49.129180   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:13:49.143704   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:13:49.155037   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:49.274882   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
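	[editor's note] The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls), loads br_netfilter, enables IP forwarding, and restarts CRI-O. A sketch of the same "edit the drop-in with sed, then restart" pattern run locally rather than over the driver's SSH runner (requires root; commands copied from the log):

	// Sketch: apply two of the sed edits from the log and restart CRI-O.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %v: %s", cmd, err, out)
		}
		return nil
	}

	func main() {
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				fmt.Println(err)
				return
			}
		}
	}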
	I0930 11:13:49.369751   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:13:49.369822   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:13:49.375071   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:13:49.375129   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:13:49.379040   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:13:49.421444   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:13:49.421545   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.450271   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.481221   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:13:49.482604   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:13:49.483828   26946 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:13:49.485093   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:49.488106   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488528   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:49.488555   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488791   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:13:49.493484   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:13:49.506933   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:13:49.507212   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:49.507471   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.507506   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.522665   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0930 11:13:49.523038   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.523529   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.523558   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.523847   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.524064   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:13:49.525464   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:49.525875   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.525916   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.540657   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0930 11:13:49.541129   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.541659   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.541680   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.541991   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.542172   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:49.542336   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:13:49.542347   26946 certs.go:194] generating shared ca certs ...
	I0930 11:13:49.542362   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.542476   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:13:49.542515   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:13:49.542525   26946 certs.go:256] generating profile certs ...
	I0930 11:13:49.542591   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:13:49.542615   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:13:49.542628   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:13:49.661476   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 ...
	I0930 11:13:49.661515   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37: {Name:mk149c204bf31f855e781b37ed00d2d45943dc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661762   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 ...
	I0930 11:13:49.661785   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37: {Name:mka1c6759c2661bfc3ab07f3168b7da60e9fc340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661922   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:13:49.662094   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
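
The step above generates and installs a fresh apiserver certificate whose subject alternative names cover the service IP, localhost, both existing control-plane IPs, the new node's IP and the kube-vip VIP. A minimal standard-library sketch of the same idea follows; the helper name and the throwaway CA in main are illustrative, not minikube's actual certs.go code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // signAPIServerCert issues a server certificate carrying the given IPs as SANs,
    // signed by the supplied CA (hypothetical helper, for illustration only).
    func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips, // the SAN list from the log above
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    }

    func main() {
    	// Throwaway self-signed CA, standing in for the cached minikubeCA key pair.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	ips := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.249"), net.ParseIP("192.168.39.3"),
    		net.ParseIP("192.168.39.238"), net.ParseIP("192.168.39.254"),
    	}
    	der, err := signAPIServerCert(caCert, caKey, ips)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("signed apiserver cert,", len(der), "DER bytes")
    }
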
	I0930 11:13:49.662275   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:13:49.662294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:13:49.662313   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:13:49.662333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:13:49.662351   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:13:49.662368   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:13:49.662384   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:13:49.662452   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:13:49.677713   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:13:49.677801   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:13:49.677835   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:13:49.677845   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:13:49.677866   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:13:49.677888   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:13:49.677908   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:13:49.677944   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:49.677971   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:13:49.677983   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:13:49.677997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:49.678030   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:49.681296   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.681887   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:49.681920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.682144   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:49.682365   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:49.682543   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:49.682691   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:49.766051   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:13:49.771499   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:13:49.783878   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:13:49.789403   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:13:49.801027   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:13:49.806774   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:13:49.824334   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:13:49.828617   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:13:49.838958   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:13:49.843225   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:13:49.853655   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:13:49.857681   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:13:49.869752   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:13:49.897794   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:13:49.925363   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:13:49.951437   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:13:49.978863   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:13:50.005498   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:13:50.030426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:13:50.055825   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:13:50.080625   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:13:50.113315   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:13:50.142931   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:13:50.168186   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:13:50.185792   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:13:50.203667   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:13:50.222202   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:13:50.241795   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:13:50.260704   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:13:50.278865   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:13:50.296763   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:13:50.303234   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:13:50.314412   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319228   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.325090   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:13:50.337510   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:13:50.351103   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356273   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356331   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.362227   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:13:50.373066   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:13:50.384243   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.388958   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.389012   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.394820   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:13:50.406295   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:13:50.410622   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:13:50.410674   26946 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:13:50.410806   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:13:50.410833   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:13:50.410873   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:13:50.426800   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:13:50.426870   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
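
The YAML above is a static pod manifest: it never goes through the API server, and it takes effect once a file with that content lands in the kubelet's manifest directory, which is what the later scp to /etc/kubernetes/manifests/kube-vip.yaml does. A minimal sketch of that write (hypothetical helper, not minikube's ssh_runner; the temp-file-then-rename step is an assumption for atomicity):

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // writeStaticPodManifest drops a pod manifest into the kubelet's static pod
    // directory; kubelet picks it up and runs it without API server involvement.
    func writeStaticPodManifest(manifestYAML []byte) error {
    	dir := "/etc/kubernetes/manifests"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	// Write to a temp file first, then rename, so kubelet never reads a half-written manifest.
    	tmp := filepath.Join(dir, ".kube-vip.yaml.tmp")
    	if err := os.WriteFile(tmp, manifestYAML, 0o600); err != nil {
    		return err
    	}
    	return os.Rename(tmp, filepath.Join(dir, "kube-vip.yaml"))
    }

    func main() {
    	manifest := []byte("# generated kube-vip manifest would go here\n")
    	if err := writeStaticPodManifest(manifest); err != nil {
    		panic(err)
    	}
    }
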
	I0930 11:13:50.426931   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.437767   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:13:50.437827   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.448545   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 11:13:50.448565   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 11:13:50.448591   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448597   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:13:50.448619   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448655   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448668   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448599   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:50.460142   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:13:50.460178   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:13:50.460491   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:13:50.460521   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:13:50.475258   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.475370   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.603685   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:13:50.603734   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
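
The three transfers above follow the same pattern: stat the versioned binary on the node, and only if that fails, stream a copy from the local cache (over SSH in the real run). A minimal local-filesystem sketch of the same check-then-copy logic (hypothetical helper; local files stand in for the scp step, and the cache path is shortened):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies name from cacheDir to destDir unless it is already present,
    // mirroring the stat-then-scp pattern in the log above.
    func ensureBinary(cacheDir, destDir, name string) error {
    	dest := filepath.Join(destDir, name)
    	if _, err := os.Stat(dest); err == nil {
    		return nil // already on the node, skip the transfer
    	}
    	src, err := os.Open(filepath.Join(cacheDir, name))
    	if err != nil {
    		return err
    	}
    	defer src.Close()
    	out, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, src)
    	return err
    }

    func main() {
    	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
    		err := ensureBinary("/path/to/.minikube/cache/linux/amd64/v1.31.1", // illustrative cache path
    			"/var/lib/minikube/binaries/v1.31.1", bin)
    		if err != nil {
    			fmt.Println("transfer failed:", bin, err)
    		}
    	}
    }
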
	I0930 11:13:51.331864   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:13:51.343111   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:13:51.361905   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:13:51.380114   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:13:51.398229   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:13:51.402565   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:13:51.414789   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:51.547939   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:13:51.568598   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:51.569032   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:51.569117   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:51.584541   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0930 11:13:51.585019   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:51.585485   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:51.585506   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:51.585824   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:51.586011   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:51.586156   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:13:51.586275   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:13:51.586294   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:51.589730   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590160   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:51.590189   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590326   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:51.590673   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:51.590813   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:51.590943   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:51.742155   26946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:51.742217   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I0930 11:14:14.534669   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (22.792425292s)
	I0930 11:14:14.534703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:14:15.090933   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m03 minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:14:15.217971   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:14:15.356327   26946 start.go:319] duration metric: took 23.770167838s to joinCluster
	I0930 11:14:15.356406   26946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:14:15.356782   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:14:15.358117   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:14:15.359571   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:15.622789   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:14:15.640897   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:14:15.641233   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:14:15.641327   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:14:15.641657   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:14:15.641759   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:15.641771   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:15.641783   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:15.641790   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:15.644778   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:16.142790   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.142817   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.142829   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.142842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.146568   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:16.642107   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.642131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.642142   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.642147   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.648466   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:17.142339   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.142362   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.142375   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.142381   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.146498   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:17.642900   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.642921   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.642930   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.642934   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.646792   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:17.647749   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:18.141856   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.141880   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.141889   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.141893   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.145059   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:18.641848   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.641883   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.641896   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.641905   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.645609   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:19.142000   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.142030   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.142041   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.142046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.146124   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.642709   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.642734   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.642746   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.642751   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.647278   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.648375   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:20.142851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.142871   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.142879   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.142883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.146328   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:20.642913   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.642940   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.642954   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.642961   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.653974   26946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:14:21.142909   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.142931   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.142942   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.142954   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.146862   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:21.642348   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.642373   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.642383   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.642388   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.647699   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:22.142178   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.142198   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.142206   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.142210   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.145760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:22.146824   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:22.642895   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.642917   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.642925   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.642931   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.648085   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:23.141847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.141872   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.141883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.141888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.149699   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:23.641992   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.642013   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.642023   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.642029   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.645640   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:24.142073   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.142096   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.142104   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.142108   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.146322   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:24.146891   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:24.642695   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.642716   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.642724   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.642731   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.646216   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:25.142500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.142538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.142546   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.146687   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:25.642542   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.642566   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.642573   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.642577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.646661   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:26.142499   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.142535   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.142545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.146202   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:26.147018   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:26.642712   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.642739   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.642751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.642756   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.646338   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:27.142246   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.142276   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.142286   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.142292   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.146473   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:27.642325   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.642347   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.642355   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.642359   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.646109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.142885   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.142912   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.142923   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.142929   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.146499   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.147250   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:28.642625   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.642652   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.642663   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.642669   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.646618   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.142391   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.142412   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.142420   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.142424   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.146320   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.642615   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.642640   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.642649   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.642653   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.646130   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.142916   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.142938   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.142947   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.142951   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.146109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.642863   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.642885   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.642893   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.642897   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.646458   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.647204   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:31.142601   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.142623   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.142631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.142635   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.146623   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.642077   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.642103   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.642114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.642119   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.645322   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.645964   26946 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:14:31.645987   26946 node_ready.go:38] duration metric: took 16.004306964s for node "ha-033260-m03" to be "Ready" ...
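
The wait above is a plain poll: GET the Node object roughly every 500ms and stop once its Ready condition reports True (about 16s here). A minimal client-go sketch of the same wait; the kubeconfig path and timeout are illustrative and this is not minikube's node_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-033260-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node to be Ready")
    }
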
	I0930 11:14:31.645997   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:14:31.646075   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:31.646090   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.646099   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.646106   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.653396   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:31.663320   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.663400   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:14:31.663405   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.663412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.663420   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.666829   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.667522   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.667537   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.667544   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.667550   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.670668   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.671278   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.671301   26946 pod_ready.go:82] duration metric: took 7.951059ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671309   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671362   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:14:31.671369   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.671376   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.671383   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.674317   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.675093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.675107   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.675114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.675120   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.678167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.678702   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.678717   26946 pod_ready.go:82] duration metric: took 7.402263ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678725   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678775   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:14:31.678782   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.678789   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.678794   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.682042   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.683033   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.683050   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.683060   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.683067   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.686124   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.686928   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.686944   26946 pod_ready.go:82] duration metric: took 8.212366ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.686951   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.687047   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:14:31.687059   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.687068   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.687077   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690190   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.690825   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:31.690840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.690850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690858   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.693597   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.694016   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.694032   26946 pod_ready.go:82] duration metric: took 7.073598ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.694050   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.842476   26946 request.go:632] Waited for 148.347924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842535   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842540   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.842547   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.842551   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.846779   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.042378   26946 request.go:632] Waited for 194.977116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042433   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042441   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.042451   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.042460   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.046938   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.047883   26946 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.047901   26946 pod_ready.go:82] duration metric: took 353.843104ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.047915   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.242541   26946 request.go:632] Waited for 194.549595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242605   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242614   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.242625   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.242634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.246270   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.443112   26946 request.go:632] Waited for 196.194005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443180   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443188   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.443196   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.443204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.446839   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.447484   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.447503   26946 pod_ready.go:82] duration metric: took 399.580784ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.447514   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.642591   26946 request.go:632] Waited for 194.994624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642658   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642663   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.642670   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.642674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.646484   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.842626   26946 request.go:632] Waited for 195.406068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842682   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842700   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.842723   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.842729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.846693   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.847589   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.847611   26946 pod_ready.go:82] duration metric: took 400.088499ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.847622   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.042743   26946 request.go:632] Waited for 195.040991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042794   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042810   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.042822   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.042831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.047437   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:33.242766   26946 request.go:632] Waited for 194.350243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242831   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.242838   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.242842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.246530   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.247420   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.247442   26946 pod_ready.go:82] duration metric: took 399.811844ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.247458   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.442488   26946 request.go:632] Waited for 194.945176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442539   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442545   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.442552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.442555   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.446162   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.642540   26946 request.go:632] Waited for 195.369281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642603   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642609   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.642615   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.642620   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.646221   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.646635   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.646655   26946 pod_ready.go:82] duration metric: took 399.188776ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.646667   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.843125   26946 request.go:632] Waited for 196.391494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843216   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843227   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.843238   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.843244   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.846706   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.042579   26946 request.go:632] Waited for 195.024865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042680   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042689   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.042697   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.042701   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.046091   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.046788   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.046810   26946 pod_ready.go:82] duration metric: took 400.13538ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.046823   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.242282   26946 request.go:632] Waited for 195.389369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242349   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242356   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.242365   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.242370   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.246179   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.442166   26946 request.go:632] Waited for 195.280581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442224   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442230   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.442237   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.442240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.445326   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.445954   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.445978   26946 pod_ready.go:82] duration metric: took 399.145783ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.445991   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.643049   26946 request.go:632] Waited for 196.981464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643124   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.643141   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.643148   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.647040   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.843108   26946 request.go:632] Waited for 195.398341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843190   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843212   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.843227   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.843238   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.846825   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.847411   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.847432   26946 pod_ready.go:82] duration metric: took 401.432801ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.847445   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.043014   26946 request.go:632] Waited for 195.507309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043102   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.043109   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.043117   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.046836   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.242781   26946 request.go:632] Waited for 195.218665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242856   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.242862   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.242866   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.246468   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.247353   26946 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.247380   26946 pod_ready.go:82] duration metric: took 399.923772ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.247393   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.442345   26946 request.go:632] Waited for 194.883869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442516   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442529   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.442541   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.442550   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.446031   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.642937   26946 request.go:632] Waited for 196.342972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642985   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642990   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.642997   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.643001   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.646624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.647369   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.647389   26946 pod_ready.go:82] duration metric: took 399.989175ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.647398   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.842485   26946 request.go:632] Waited for 195.020246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842575   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842586   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.842597   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.842605   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.845997   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.043063   26946 request.go:632] Waited for 196.343615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043113   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043119   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.043125   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.043131   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.046327   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.046783   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.046799   26946 pod_ready.go:82] duration metric: took 399.395226ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.046810   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.242936   26946 request.go:632] Waited for 196.062784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243024   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.243037   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.243046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.246888   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.442803   26946 request.go:632] Waited for 195.27104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442859   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442867   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.442877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.442888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.446304   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.446972   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.447001   26946 pod_ready.go:82] duration metric: took 400.183775ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.447011   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.642468   26946 request.go:632] Waited for 195.395201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642532   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.642545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.642549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.646175   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.842841   26946 request.go:632] Waited for 195.970164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842924   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.842938   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.842946   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.846452   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.847134   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.847153   26946 pod_ready.go:82] duration metric: took 400.136505ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.847163   26946 pod_ready.go:39] duration metric: took 5.201155018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
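
	The readiness phase summarized above is driven by repeated GETs against the apiserver for each control-plane pod and its node, with the roughly 195ms pauses coming from client-side request throttling. Below is a minimal, illustrative Go sketch of the same poll-until-Ready pattern using only the standard library; the endpoint is copied from the log, while skipping TLS verification and authentication is an assumption made purely to keep the sketch short and is not how minikube talks to the apiserver.

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"time"
	)

	// podStatus mirrors only the fields of a Pod object needed to read the
	// Ready condition from the apiserver's JSON response.
	type podStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// podReady fetches the pod and reports whether its Ready condition is True.
	func podReady(c *http.Client, url string) (bool, error) {
		resp, err := c.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return false, fmt.Errorf("unexpected status %s", resp.Status)
		}
		var p podStatus
		if err := json.NewDecoder(resp.Body).Decode(&p); err != nil {
			return false, err
		}
		for _, cond := range p.Status.Conditions {
			if cond.Type == "Ready" {
				return cond.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		// Endpoint taken from the log; a real client would present credentials
		// and verify the cluster CA instead of skipping verification.
		url := "https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260"
		client := &http.Client{
			Timeout:   10 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(6 * time.Minute) // per-pod budget seen in the log
		for time.Now().Before(deadline) {
			if ok, err := podReady(client, url); err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(200 * time.Millisecond) // spacing comparable to the throttled requests above
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
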
	I0930 11:14:36.847177   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:14:36.847229   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:14:36.869184   26946 api_server.go:72] duration metric: took 21.512734614s to wait for apiserver process to appear ...
	I0930 11:14:36.869210   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:14:36.869231   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:14:36.875656   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:14:36.875723   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:14:36.875730   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.875741   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.875751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.876680   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:14:36.876763   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:14:36.876785   26946 api_server.go:131] duration metric: took 7.567961ms to wait for apiserver health ...
	I0930 11:14:36.876795   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:14:37.042474   26946 request.go:632] Waited for 165.583212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042549   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042557   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.042568   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.042577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.049247   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:37.056036   26946 system_pods.go:59] 24 kube-system pods found
	I0930 11:14:37.056063   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.056069   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.056073   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.056076   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.056079   26946 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.056082   26946 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.056085   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.056088   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.056091   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.056094   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.056097   26946 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.056100   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.056105   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.056108   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.056111   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.056115   26946 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.056120   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.056151   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.056164   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.056169   26946 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.056177   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.056182   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.056189   26946 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.056194   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.056204   26946 system_pods.go:74] duration metric: took 179.399341ms to wait for pod list to return data ...
	I0930 11:14:37.056216   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:14:37.242741   26946 request.go:632] Waited for 186.4192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242795   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242800   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.242807   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.242813   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.247153   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.247269   26946 default_sa.go:45] found service account: "default"
	I0930 11:14:37.247285   26946 default_sa.go:55] duration metric: took 191.060236ms for default service account to be created ...
	I0930 11:14:37.247292   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:14:37.442756   26946 request.go:632] Waited for 195.39174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442830   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.442850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.442861   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.450094   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:37.457440   26946 system_pods.go:86] 24 kube-system pods found
	I0930 11:14:37.457477   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.457485   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.457491   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.457497   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.457506   26946 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.457512   26946 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.457518   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.457524   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.457530   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.457538   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.457547   26946 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.457553   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.457562   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.457569   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.457575   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.457584   26946 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.457590   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.457597   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.457603   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.457612   26946 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.457630   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.457637   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.457643   26946 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.457648   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.457657   26946 system_pods.go:126] duration metric: took 210.359061ms to wait for k8s-apps to be running ...
	I0930 11:14:37.457669   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:14:37.457721   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:14:37.476929   26946 system_svc.go:56] duration metric: took 19.252575ms WaitForService to wait for kubelet
	I0930 11:14:37.476958   26946 kubeadm.go:582] duration metric: took 22.120515994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:14:37.476982   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:14:37.642377   26946 request.go:632] Waited for 165.309074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642424   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642429   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.642438   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.642449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.646747   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.647864   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647885   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647896   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647900   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647904   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647908   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647912   26946 node_conditions.go:105] duration metric: took 170.925329ms to run NodePressure ...
	I0930 11:14:37.647922   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:14:37.647945   26946 start.go:255] writing updated cluster config ...
	I0930 11:14:37.648212   26946 ssh_runner.go:195] Run: rm -f paused
	I0930 11:14:37.699426   26946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:14:37.701518   26946 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
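
	Where the minikube log ends, the report switches to logs collected from the node itself, beginning with CRI-O's debug output below: each entry there records one CRI gRPC call (Version, ImageFsInfo, ListContainers) served over the runtime socket. The following is a hedged Go sketch of issuing the same Version and ListContainers calls directly against a CRI runtime; the socket path and the use of k8s.io/cri-api with grpc-go are assumptions for illustration, not the tooling that produced this report.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path on the node.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC as the "/runtime.v1.RuntimeService/Version" entries below.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("runtime %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Same RPC as the "/runtime.v1.RuntimeService/ListContainers" entries below;
		// an empty filter returns the full container list.
		containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range containers.Containers {
			fmt.Println(c.Id, c.Metadata.GetName(), c.State)
		}
	}
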
	
	
	==> CRI-O <==
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.617324073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695095617302226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6a08c13-8990-4d98-b984-40b7a175d54e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.617941634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f49d40cf-1c30-43ce-93ac-2b12fdd2b55e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.618004340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f49d40cf-1c30-43ce-93ac-2b12fdd2b55e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.618245262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f49d40cf-1c30-43ce-93ac-2b12fdd2b55e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.658043988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74deb4af-02e5-4e2a-b6ad-4849d7ee3570 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.658122694Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74deb4af-02e5-4e2a-b6ad-4849d7ee3570 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.659430208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=012c177f-5e41-461c-8f55-d7d6761e9581 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.659933978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695095659908850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=012c177f-5e41-461c-8f55-d7d6761e9581 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.660481536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a4f9679-42d9-451d-96e1-d6c329f0622d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.660537065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a4f9679-42d9-451d-96e1-d6c329f0622d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.661514982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a4f9679-42d9-451d-96e1-d6c329f0622d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.703871312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b8b85c0-cf63-4bac-b182-002cf991510b name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.703951780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b8b85c0-cf63-4bac-b182-002cf991510b name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.705548885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=690a4d48-1143-46c9-9bbe-d909f7dd1770 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.706036221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695095706005388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=690a4d48-1143-46c9-9bbe-d909f7dd1770 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.706609185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=305721fd-ae9c-466a-a180-e685364ef5cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.706725734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=305721fd-ae9c-466a-a180-e685364ef5cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.706983739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=305721fd-ae9c-466a-a180-e685364ef5cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.746022471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c29d4b1-3fd4-4f98-a872-704bd3fba730 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.746230911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c29d4b1-3fd4-4f98-a872-704bd3fba730 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.747802019Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=701f4c21-e152-46dc-ad89-06ce66da9e11 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.748293318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695095748262276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=701f4c21-e152-46dc-ad89-06ce66da9e11 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.749109295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1a4af3-9972-452e-9130-2c7ad134b75e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.749183870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1a4af3-9972-452e-9130-2c7ad134b75e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:15 ha-033260 crio[660]: time="2024-09-30 11:18:15.749499726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1a4af3-9972-452e-9130-2c7ad134b75e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	f612e29e1b4eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   571ace347c86d       storage-provisioner
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   b2990036962da       kindnet-g94k6
	7a9e01197e5c6       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2bd722c6afa63       kube-vip-ha-033260
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   f789f882a4d3c       etcd-ha-033260
	e62c0a6cc031f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6bdfa51706557       kube-controller-manager-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	cd2027f0a04e1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   676d3fbaf3e6f       kube-apiserver-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:53856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00078279s
	[INFO] 10.244.0.4:40457 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001984462s
	[INFO] 10.244.2.2:53822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006986108s
	[INFO] 10.244.2.2:56668 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001677174s
	[INFO] 10.244.1.2:39538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172765s
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.1.2:57277 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000266561s
	[INFO] 10.244.1.2:48530 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000385853s
	[INFO] 10.244.0.4:37489 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002109336s
	[INFO] 10.244.0.4:53881 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132699s
	[INFO] 10.244.0.4:35131 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120989s
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    e1ab2d78-3004-455b-b8b3-86a48689299f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m12s
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m12s
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m17s  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m13s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           3m56s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:15:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    08e05cdc-874f-4f82-99d4-84bb26fd07ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-033260-m02 status is now: NodeNotReady
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    92c7790b-7ee9-43e4-b1b8-fd69ae5fa989
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m3s
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m5s
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           3m56s                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    15a5a2bf-b69b-4b89-b5f2-f6529ae084b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-033260-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040385] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839402] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.653040] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.597753] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	{"level":"warn","ts":"2024-09-30T11:18:15.650603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:15.751289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:15.808718Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:15.810705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:15.850840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:15.951322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.064326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.073112Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.083882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.088245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.091800Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.097487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.103978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.110558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.114829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.118182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.127906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.134890Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.142041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.147837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.150742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.152753Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.156357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.162397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:16.169235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:16 up 6 min,  0 users,  load average: 0.33, 0.18, 0.08
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:17:37.855586       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:17:47.860490       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:17:47.860600       1 main.go:299] handling current node
	I0930 11:17:47.860694       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:17:47.860727       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:17:47.860874       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:17:47.860897       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:17:47.860960       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:17:47.860978       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:17:57.862776       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:17:57.862852       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:17:57.862995       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:17:57.863020       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:17:57.863078       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:17:57.863084       1 main.go:299] handling current node
	I0930 11:17:57.863098       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:17:57.863102       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854593       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:07.854770       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854951       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:07.854979       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:07.855034       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:07.855052       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:18:07.855106       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:07.855130       1 main.go:299] handling current node
	
	
	==> kube-apiserver [cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac] <==
	I0930 11:11:58.463989       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 11:11:58.477865       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249]
	I0930 11:11:58.479372       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:11:58.487328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:11:58.586099       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 11:11:59.517972       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 11:11:59.542879       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 11:11:59.558820       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 11:12:04.282712       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0930 11:12:04.376507       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0930 11:14:41.794861       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58556: use of closed network connection
	E0930 11:14:41.976585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58584: use of closed network connection
	E0930 11:14:42.175263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58602: use of closed network connection
	E0930 11:14:42.398453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58626: use of closed network connection
	E0930 11:14:42.598999       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58646: use of closed network connection
	E0930 11:14:42.786264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58670: use of closed network connection
	E0930 11:14:42.985795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58688: use of closed network connection
	E0930 11:14:43.164451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58700: use of closed network connection
	E0930 11:14:43.352582       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58708: use of closed network connection
	E0930 11:14:43.634509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58726: use of closed network connection
	E0930 11:14:43.812335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58746: use of closed network connection
	E0930 11:14:44.006684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58766: use of closed network connection
	E0930 11:14:44.194031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58782: use of closed network connection
	E0930 11:14:44.561371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58814: use of closed network connection
	W0930 11:16:08.485734       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	
	
	==> kube-controller-manager [e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489] <==
	I0930 11:15:14.593101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.593158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.605401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.879876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:15.297330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:16.002721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.158455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.429273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.922721       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-033260-m04"
	I0930 11:15:18.922856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:19.229459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:24.734460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561906       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:15:34.575771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:35.966445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:45.204985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:16:30.993129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:30.994314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:16:31.023898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:31.050052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.150574ms"
	I0930 11:16:31.050219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.36µs"
	I0930 11:16:31.218479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:34.045967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:36.316239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:12:06.949025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:12:06.986064       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:12:06.986193       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:12:07.041171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:12:07.041238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:12:07.041262       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:12:07.044020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:12:07.044727       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:12:07.044757       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:12:07.047853       1 config.go:199] "Starting service config controller"
	I0930 11:12:07.048187       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:12:07.048613       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:12:07.048700       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:12:07.051971       1 config.go:328] "Starting node config controller"
	I0930 11:12:07.052033       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:12:07.148982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:12:07.149026       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:12:07.152927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	I0930 11:11:59.743507       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:14:38.641000       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.642588       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 12532e14-b4c0-4c7d-ab93-e96698fbc986(default/busybox-7dff88458-rkczc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkczc"
	E0930 11:14:38.642720       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" pod="default/busybox-7dff88458-rkczc"
	I0930 11:14:38.642772       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.700019       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.700408       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e62e1e44-3723-496c-85a3-7a79e9c8264b(default/busybox-7dff88458-nbhwc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nbhwc"
	E0930 11:14:38.700579       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" pod="default/busybox-7dff88458-nbhwc"
	I0930 11:14:38.700685       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.701396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:14:38.701487       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 004c0140-b81f-4e7b-aa0d-0aa6f7403351(default/busybox-7dff88458-748nr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-748nr"
	E0930 11:14:38.701528       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" pod="default/busybox-7dff88458-748nr"
	I0930 11:14:38.701566       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:15:14.650435       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mkbm9" node="ha-033260-m04"
	E0930 11:15:14.650543       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-mkbm9"
	E0930 11:15:14.687957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	
	
	==> kubelet <==
	Sep 30 11:16:59 ha-033260 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:16:59 ha-033260 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:16:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:16:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603405    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603474    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605544    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605573    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.607869    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.608153    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611241    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611290    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.612829    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.613366    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.615817    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.616359    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.469234    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620277    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620330    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622386    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622824    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.394195173s)
ha_test.go:413: expected profile "ha-033260" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-033260\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-033260\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-033260\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.104\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.354715684s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m03_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:11:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:11:16.968147   26946 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:11:16.968259   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968268   26946 out.go:358] Setting ErrFile to fd 2...
	I0930 11:11:16.968272   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968475   26946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:11:16.969014   26946 out.go:352] Setting JSON to false
	I0930 11:11:16.969874   26946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3224,"bootTime":1727691453,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:11:16.969971   26946 start.go:139] virtualization: kvm guest
	I0930 11:11:16.972340   26946 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:11:16.973700   26946 notify.go:220] Checking for updates...
	I0930 11:11:16.973712   26946 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:11:16.975164   26946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:11:16.976567   26946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:11:16.977791   26946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:16.978971   26946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:11:16.980151   26946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:11:16.981437   26946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:11:17.016837   26946 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 11:11:17.017911   26946 start.go:297] selected driver: kvm2
	I0930 11:11:17.017921   26946 start.go:901] validating driver "kvm2" against <nil>
	I0930 11:11:17.017932   26946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:11:17.018657   26946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.018742   26946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:11:17.034306   26946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:11:17.034349   26946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 11:11:17.034586   26946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:11:17.034614   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:17.034651   26946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 11:11:17.034662   26946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 11:11:17.034717   26946 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:17.034818   26946 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.036732   26946 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:11:17.037780   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:17.037816   26946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:11:17.037823   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:11:17.037892   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:11:17.037903   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:11:17.038215   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:17.038236   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json: {Name:mkb40a3a18f0ab7d52c306f0204aa0e145307acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:17.038367   26946 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:11:17.038394   26946 start.go:364] duration metric: took 15.009µs to acquireMachinesLock for "ha-033260"
	I0930 11:11:17.038414   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:11:17.038466   26946 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 11:11:17.039863   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:11:17.039975   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:11:17.040024   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:11:17.054681   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0930 11:11:17.055106   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:11:17.055654   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:11:17.055673   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:11:17.056010   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:11:17.056264   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:17.056403   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:17.056571   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:11:17.056596   26946 client.go:168] LocalClient.Create starting
	I0930 11:11:17.056623   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:11:17.056664   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056676   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056725   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:11:17.056743   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056752   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056765   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:11:17.056773   26946 main.go:141] libmachine: (ha-033260) Calling .PreCreateCheck
	I0930 11:11:17.057093   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:17.057527   26946 main.go:141] libmachine: Creating machine...
	I0930 11:11:17.057540   26946 main.go:141] libmachine: (ha-033260) Calling .Create
	I0930 11:11:17.057672   26946 main.go:141] libmachine: (ha-033260) Creating KVM machine...
	I0930 11:11:17.058923   26946 main.go:141] libmachine: (ha-033260) DBG | found existing default KVM network
	I0930 11:11:17.059559   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.059428   26970 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0930 11:11:17.059596   26946 main.go:141] libmachine: (ha-033260) DBG | created network xml: 
	I0930 11:11:17.059615   26946 main.go:141] libmachine: (ha-033260) DBG | <network>
	I0930 11:11:17.059621   26946 main.go:141] libmachine: (ha-033260) DBG |   <name>mk-ha-033260</name>
	I0930 11:11:17.059629   26946 main.go:141] libmachine: (ha-033260) DBG |   <dns enable='no'/>
	I0930 11:11:17.059635   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059640   26946 main.go:141] libmachine: (ha-033260) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 11:11:17.059646   26946 main.go:141] libmachine: (ha-033260) DBG |     <dhcp>
	I0930 11:11:17.059651   26946 main.go:141] libmachine: (ha-033260) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 11:11:17.059658   26946 main.go:141] libmachine: (ha-033260) DBG |     </dhcp>
	I0930 11:11:17.059663   26946 main.go:141] libmachine: (ha-033260) DBG |   </ip>
	I0930 11:11:17.059667   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059673   26946 main.go:141] libmachine: (ha-033260) DBG | </network>
	I0930 11:11:17.059679   26946 main.go:141] libmachine: (ha-033260) DBG | 
	I0930 11:11:17.064624   26946 main.go:141] libmachine: (ha-033260) DBG | trying to create private KVM network mk-ha-033260 192.168.39.0/24...
	I0930 11:11:17.128145   26946 main.go:141] libmachine: (ha-033260) DBG | private KVM network mk-ha-033260 192.168.39.0/24 created
	I0930 11:11:17.128172   26946 main.go:141] libmachine: (ha-033260) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.128183   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.128100   26970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.128201   26946 main.go:141] libmachine: (ha-033260) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:11:17.128218   26946 main.go:141] libmachine: (ha-033260) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:11:17.365994   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.365804   26970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa...
	I0930 11:11:17.493008   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492862   26970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk...
	I0930 11:11:17.493034   26946 main.go:141] libmachine: (ha-033260) DBG | Writing magic tar header
	I0930 11:11:17.493046   26946 main.go:141] libmachine: (ha-033260) DBG | Writing SSH key tar header
	I0930 11:11:17.493053   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492975   26970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.493066   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260
	I0930 11:11:17.493124   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 (perms=drwx------)
	I0930 11:11:17.493158   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:11:17.493173   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:11:17.493181   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:11:17.493193   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:11:17.493202   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:11:17.493226   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.493246   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:11:17.493258   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:11:17.493264   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:11:17.493275   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:17.493280   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:11:17.493286   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home
	I0930 11:11:17.493291   26946 main.go:141] libmachine: (ha-033260) DBG | Skipping /home - not owner
	I0930 11:11:17.494319   26946 main.go:141] libmachine: (ha-033260) define libvirt domain using xml: 
	I0930 11:11:17.494340   26946 main.go:141] libmachine: (ha-033260) <domain type='kvm'>
	I0930 11:11:17.494347   26946 main.go:141] libmachine: (ha-033260)   <name>ha-033260</name>
	I0930 11:11:17.494351   26946 main.go:141] libmachine: (ha-033260)   <memory unit='MiB'>2200</memory>
	I0930 11:11:17.494356   26946 main.go:141] libmachine: (ha-033260)   <vcpu>2</vcpu>
	I0930 11:11:17.494359   26946 main.go:141] libmachine: (ha-033260)   <features>
	I0930 11:11:17.494365   26946 main.go:141] libmachine: (ha-033260)     <acpi/>
	I0930 11:11:17.494370   26946 main.go:141] libmachine: (ha-033260)     <apic/>
	I0930 11:11:17.494377   26946 main.go:141] libmachine: (ha-033260)     <pae/>
	I0930 11:11:17.494399   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494410   26946 main.go:141] libmachine: (ha-033260)   </features>
	I0930 11:11:17.494415   26946 main.go:141] libmachine: (ha-033260)   <cpu mode='host-passthrough'>
	I0930 11:11:17.494422   26946 main.go:141] libmachine: (ha-033260)   
	I0930 11:11:17.494425   26946 main.go:141] libmachine: (ha-033260)   </cpu>
	I0930 11:11:17.494429   26946 main.go:141] libmachine: (ha-033260)   <os>
	I0930 11:11:17.494433   26946 main.go:141] libmachine: (ha-033260)     <type>hvm</type>
	I0930 11:11:17.494461   26946 main.go:141] libmachine: (ha-033260)     <boot dev='cdrom'/>
	I0930 11:11:17.494487   26946 main.go:141] libmachine: (ha-033260)     <boot dev='hd'/>
	I0930 11:11:17.494498   26946 main.go:141] libmachine: (ha-033260)     <bootmenu enable='no'/>
	I0930 11:11:17.494504   26946 main.go:141] libmachine: (ha-033260)   </os>
	I0930 11:11:17.494511   26946 main.go:141] libmachine: (ha-033260)   <devices>
	I0930 11:11:17.494518   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='cdrom'>
	I0930 11:11:17.494529   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/boot2docker.iso'/>
	I0930 11:11:17.494540   26946 main.go:141] libmachine: (ha-033260)       <target dev='hdc' bus='scsi'/>
	I0930 11:11:17.494547   26946 main.go:141] libmachine: (ha-033260)       <readonly/>
	I0930 11:11:17.494558   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494568   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='disk'>
	I0930 11:11:17.494579   26946 main.go:141] libmachine: (ha-033260)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:11:17.494592   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk'/>
	I0930 11:11:17.494603   26946 main.go:141] libmachine: (ha-033260)       <target dev='hda' bus='virtio'/>
	I0930 11:11:17.494611   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494625   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494636   26946 main.go:141] libmachine: (ha-033260)       <source network='mk-ha-033260'/>
	I0930 11:11:17.494646   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494655   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494664   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494672   26946 main.go:141] libmachine: (ha-033260)       <source network='default'/>
	I0930 11:11:17.494682   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494731   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494748   26946 main.go:141] libmachine: (ha-033260)     <serial type='pty'>
	I0930 11:11:17.494754   26946 main.go:141] libmachine: (ha-033260)       <target port='0'/>
	I0930 11:11:17.494763   26946 main.go:141] libmachine: (ha-033260)     </serial>
	I0930 11:11:17.494791   26946 main.go:141] libmachine: (ha-033260)     <console type='pty'>
	I0930 11:11:17.494813   26946 main.go:141] libmachine: (ha-033260)       <target type='serial' port='0'/>
	I0930 11:11:17.494833   26946 main.go:141] libmachine: (ha-033260)     </console>
	I0930 11:11:17.494851   26946 main.go:141] libmachine: (ha-033260)     <rng model='virtio'>
	I0930 11:11:17.494868   26946 main.go:141] libmachine: (ha-033260)       <backend model='random'>/dev/random</backend>
	I0930 11:11:17.494879   26946 main.go:141] libmachine: (ha-033260)     </rng>
	I0930 11:11:17.494884   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494894   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494900   26946 main.go:141] libmachine: (ha-033260)   </devices>
	I0930 11:11:17.494910   26946 main.go:141] libmachine: (ha-033260) </domain>
	I0930 11:11:17.494919   26946 main.go:141] libmachine: (ha-033260) 
	I0930 11:11:17.499284   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:1e:fd:d9 in network default
	I0930 11:11:17.499904   26946 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:11:17.499920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:17.500618   26946 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:11:17.501042   26946 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:11:17.501643   26946 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:11:17.502369   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:18.692089   26946 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:11:18.692860   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.693297   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.693313   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.693260   26970 retry.go:31] will retry after 231.51107ms: waiting for machine to come up
	I0930 11:11:18.926878   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.927339   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.927367   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.927281   26970 retry.go:31] will retry after 238.29389ms: waiting for machine to come up
	I0930 11:11:19.167097   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.167813   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.167841   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.167759   26970 retry.go:31] will retry after 304.46036ms: waiting for machine to come up
	I0930 11:11:19.474179   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.474648   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.474678   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.474604   26970 retry.go:31] will retry after 472.499674ms: waiting for machine to come up
	I0930 11:11:19.948108   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.948622   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.948649   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.948597   26970 retry.go:31] will retry after 645.07677ms: waiting for machine to come up
	I0930 11:11:20.595504   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:20.595963   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:20.595984   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:20.595941   26970 retry.go:31] will retry after 894.966176ms: waiting for machine to come up
	I0930 11:11:21.492428   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:21.492831   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:21.492882   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:21.492814   26970 retry.go:31] will retry after 848.859093ms: waiting for machine to come up
	I0930 11:11:22.343403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:22.343835   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:22.343861   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:22.343753   26970 retry.go:31] will retry after 1.05973931s: waiting for machine to come up
	I0930 11:11:23.404961   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:23.405359   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:23.405385   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:23.405316   26970 retry.go:31] will retry after 1.638432323s: waiting for machine to come up
	I0930 11:11:25.046055   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:25.046452   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:25.046477   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:25.046405   26970 retry.go:31] will retry after 2.080958051s: waiting for machine to come up
	I0930 11:11:27.128708   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:27.129133   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:27.129156   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:27.129053   26970 retry.go:31] will retry after 2.256414995s: waiting for machine to come up
	I0930 11:11:29.387356   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:29.387768   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:29.387788   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:29.387745   26970 retry.go:31] will retry after 3.372456281s: waiting for machine to come up
	I0930 11:11:32.761875   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:32.762235   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:32.762254   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:32.762202   26970 retry.go:31] will retry after 3.757571385s: waiting for machine to come up
	I0930 11:11:36.524130   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:36.524597   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:36.524613   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:36.524548   26970 retry.go:31] will retry after 4.081097536s: waiting for machine to come up
	I0930 11:11:40.609929   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610386   26946 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:11:40.610415   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610423   26946 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:11:40.610796   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260
	I0930 11:11:40.682058   26946 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:11:40.682112   26946 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:11:40.682151   26946 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:11:40.684625   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.684964   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.684990   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.685088   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:11:40.685108   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:11:40.685155   26946 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:11:40.685168   26946 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:11:40.685196   26946 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:11:40.813832   26946 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:11:40.814089   26946 main.go:141] libmachine: (ha-033260) KVM machine creation complete!
	I0930 11:11:40.814483   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:40.815001   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815218   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815362   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:11:40.815373   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:11:40.816691   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:11:40.816703   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:11:40.816707   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:11:40.816712   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.818838   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819210   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.819240   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819306   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.819465   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819601   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819739   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.819883   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.820061   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.820071   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:11:40.929008   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:11:40.929033   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:11:40.929040   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.931913   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932264   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.932308   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932448   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.932679   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932816   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932931   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.933122   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.933283   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.933295   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:11:41.042597   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:11:41.042675   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:11:41.042682   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:11:41.042689   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.042906   26946 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:11:41.042918   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.043088   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.045281   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045591   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.045634   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045749   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.045916   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046048   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046166   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.046324   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.046537   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.046554   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:11:41.173460   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:11:41.173489   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.176142   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176483   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.176513   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176659   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.176845   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.176984   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.177110   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.177285   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.177443   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.177458   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:11:41.295471   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:11:41.295501   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:11:41.295523   26946 buildroot.go:174] setting up certificates
	I0930 11:11:41.295535   26946 provision.go:84] configureAuth start
	I0930 11:11:41.295560   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.295824   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:41.298508   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.298844   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.298871   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.299011   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.301187   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301504   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.301529   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301674   26946 provision.go:143] copyHostCerts
	I0930 11:11:41.301701   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301735   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:11:41.301744   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301807   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:11:41.301895   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301913   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:11:41.301919   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301944   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:11:41.301997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302013   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:11:41.302019   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302039   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:11:41.302094   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:11:41.595618   26946 provision.go:177] copyRemoteCerts
	I0930 11:11:41.595675   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:11:41.595700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.598644   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599092   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.599122   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599308   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.599628   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.599809   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.599990   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:41.686253   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:11:41.686348   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:11:41.716396   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:11:41.716470   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:11:41.741350   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:11:41.741426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:11:41.765879   26946 provision.go:87] duration metric: took 470.33102ms to configureAuth
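
configureAuth above pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest through minikube's ssh_runner. A rough sketch of the same run-a-command-over-SSH pattern with golang.org/x/crypto/ssh follows; the host, port, user and key path are taken from the log, while the command and error handling are illustrative assumptions.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address copied from the log; everything else is a sketch.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.249:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// One remote command per session, mirroring the ssh_runner "Run:" lines above.
	out, err := session.CombinedOutput("sudo mkdir -p /etc/docker")
	fmt.Printf("%s err=%v\n", out, err)
}
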
	I0930 11:11:41.765904   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:11:41.766073   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:11:41.766153   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.768846   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769139   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.769163   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769350   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.769573   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769751   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.770004   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.770154   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.770171   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:11:41.997580   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:11:41.997603   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:11:41.997612   26946 main.go:141] libmachine: (ha-033260) Calling .GetURL
	I0930 11:11:41.998809   26946 main.go:141] libmachine: (ha-033260) DBG | Using libvirt version 6000000
	I0930 11:11:42.000992   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001367   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.001403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001552   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:11:42.001574   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:11:42.001580   26946 client.go:171] duration metric: took 24.944976164s to LocalClient.Create
	I0930 11:11:42.001599   26946 start.go:167] duration metric: took 24.945029476s to libmachine.API.Create "ha-033260"
	I0930 11:11:42.001605   26946 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:11:42.001634   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:11:42.001658   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.001903   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:11:42.001928   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.004137   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004477   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.004506   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004626   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.004785   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.004929   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.005073   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.088764   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:11:42.093605   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:11:42.093649   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:11:42.093718   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:11:42.093798   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:11:42.093808   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:11:42.093909   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:11:42.104383   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:11:42.133090   26946 start.go:296] duration metric: took 131.471881ms for postStartSetup
	I0930 11:11:42.133135   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:42.133732   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.136141   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136473   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.136492   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136788   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:42.136956   26946 start.go:128] duration metric: took 25.09848122s to createHost
	I0930 11:11:42.136975   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.139440   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139825   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.139853   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139989   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.140175   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140334   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140446   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.140582   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:42.140793   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:42.140810   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:11:42.250567   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694702.228135172
	
	I0930 11:11:42.250590   26946 fix.go:216] guest clock: 1727694702.228135172
	I0930 11:11:42.250600   26946 fix.go:229] Guest: 2024-09-30 11:11:42.228135172 +0000 UTC Remote: 2024-09-30 11:11:42.136966335 +0000 UTC m=+25.202018114 (delta=91.168837ms)
	I0930 11:11:42.250654   26946 fix.go:200] guest clock delta is within tolerance: 91.168837ms
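
The clock check above runs `date +%s.%N` on the guest and compares it to the host-side timestamp recorded just before the SSH call. A small Go sketch of that comparison, using the guest value from the log and an assumed 2s tolerance (the real threshold is internal to minikube), could look like this:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestRaw := "1727694702.228135172"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// The log compares against the host-side timestamp taken just before the SSH call.
	host := time.Now()
	delta := host.Sub(guest)

	// 2s is an assumed tolerance for this sketch only.
	if math.Abs(delta.Seconds()) < 2.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is too large, would resync\n", delta)
	}
}
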
	I0930 11:11:42.250662   26946 start.go:83] releasing machines lock for "ha-033260", held for 25.21225918s
	I0930 11:11:42.250689   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.250959   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.253937   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254263   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.254291   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254395   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.254873   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255071   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255171   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:11:42.255230   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.255277   26946 ssh_runner.go:195] Run: cat /version.json
	I0930 11:11:42.255305   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.257775   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258072   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258098   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258117   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258247   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258399   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258499   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258530   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258550   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.258636   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258725   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.258782   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258905   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.259023   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.338949   26946 ssh_runner.go:195] Run: systemctl --version
	I0930 11:11:42.367977   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:11:42.529658   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:11:42.535739   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:11:42.535805   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:11:42.553004   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
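
The step above renames any bridge/podman CNI configs to *.mk_disabled so they do not conflict with the CNI minikube installs. A hypothetical Go equivalent of that find/mv pipeline, meant to run inside the guest as root, is sketched below.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirrors the `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above.
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, p := range patterns {
		matches, _ := filepath.Glob(p)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}
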
	I0930 11:11:42.553029   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:11:42.553101   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:11:42.571333   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:11:42.586474   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:11:42.586529   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:11:42.600562   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:11:42.614592   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:11:42.724714   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:11:42.863957   26946 docker.go:233] disabling docker service ...
	I0930 11:11:42.864016   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:11:42.878829   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:11:42.892519   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:11:43.031759   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:11:43.156228   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:11:43.171439   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:11:43.190694   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:11:43.190806   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.201572   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:11:43.201660   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.212771   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.224198   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.235643   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:11:43.247521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.258652   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.276825   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
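
The sed invocations above pin the CRI-O pause image and switch the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. For illustration only, the same two rewrites expressed as a small Go program (run inside the guest as root) might look like this:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Sketch of the sed edits above; the real work happens over SSH inside the VM.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Replace whole lines, the same way `sed -i 's|^.*pause_image = .*$|...|'` does.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", conf)
}
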
	I0930 11:11:43.288336   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:11:43.299367   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:11:43.299422   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:11:43.314057   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
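
The status 255 above simply means br_netfilter was not loaded yet, so the bridge sysctl did not exist; after modprobe and enabling ip_forward the values can be read back. A tiny Go sketch of that verification (reading the paths only, no modprobe) is below.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The two kernel knobs touched in the log above: the bridge netfilter sysctl
	// (present only once br_netfilter is loaded) and IPv4 forwarding.
	for _, path := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables",
		"/proc/sys/net/ipv4/ip_forward",
	} {
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Printf("%s missing (%v): module probably not loaded yet\n", path, err)
			continue
		}
		fmt.Printf("%s = %s\n", path, strings.TrimSpace(string(data)))
	}
}
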
	I0930 11:11:43.324403   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:43.446606   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:11:43.543986   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:11:43.544064   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:11:43.548794   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:11:43.548857   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:11:43.552827   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:11:43.593000   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:11:43.593096   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.624593   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.654845   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:11:43.656217   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:43.658636   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.658956   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:43.658982   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.659236   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:11:43.663528   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
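
The bash one-liner above rewrites /etc/hosts idempotently: it drops any stale host.minikube.internal entry and appends the current mapping. A hypothetical Go version of the same idempotent update, meant to run as root on the guest, is sketched here.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Same effect as `{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "..."; } > /tmp/h.$$`.
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	// IP taken from the log above.
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}
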
	I0930 11:11:43.677810   26946 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:11:43.677905   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:43.677950   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:43.712140   26946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:11:43.712231   26946 ssh_runner.go:195] Run: which lz4
	I0930 11:11:43.716210   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:11:43.716286   26946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:11:43.720372   26946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:11:43.720397   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:11:45.117936   26946 crio.go:462] duration metric: took 1.401668541s to copy over tarball
	I0930 11:11:45.118009   26946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:11:47.123971   26946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.00593624s)
	I0930 11:11:47.124002   26946 crio.go:469] duration metric: took 2.006037646s to extract the tarball
	I0930 11:11:47.124011   26946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:11:47.161484   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:47.208444   26946 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:11:47.208468   26946 cache_images.go:84] Images are preloaded, skipping loading
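
The preload check above asks `crictl images --output json` whether the expected control-plane images are already present; only when they are missing does minikube copy and extract the ~388 MB tarball. A minimal Go sketch of that check, using kube-apiserver:v1.31.1 as the sentinel image as the log does, follows.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Shape of `crictl images --output json`, trimmed to the field this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// The log uses kube-apiserver as the sentinel image for the preload decision.
	want := "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				fmt.Println("preloaded images present, skipping loading")
				return
			}
		}
	}
	fmt.Println("preload missing, would copy and extract the tarball")
}
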
	I0930 11:11:47.208475   26946 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:11:47.208561   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:11:47.208632   26946 ssh_runner.go:195] Run: crio config
	I0930 11:11:47.256652   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:47.256671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:11:47.256679   26946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:11:47.256700   26946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:11:47.256808   26946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:11:47.256829   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:11:47.256866   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:11:47.273274   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:11:47.273411   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
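
The generated kube-vip manifest above advertises the HA virtual IP 192.168.39.254 on eth0 via ARP and load-balances the API server on port 8443, with leader election through the plndr-cp-lock lease. As a trivial illustration (not part of minikube), one could probe that VIP like this once the control plane is up:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip config above; this is only a reachability probe.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("kube-vip is answering on the HA VIP")
}
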
	I0930 11:11:47.273489   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:11:47.284468   26946 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:11:47.284546   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:11:47.295086   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:11:47.313062   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:11:47.330490   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:11:47.348148   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 11:11:47.364645   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:11:47.368788   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:11:47.381517   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:47.516902   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:11:47.535500   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:11:47.535531   26946 certs.go:194] generating shared ca certs ...
	I0930 11:11:47.535554   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.535745   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:11:47.535819   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:11:47.535836   26946 certs.go:256] generating profile certs ...
	I0930 11:11:47.535916   26946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:11:47.535947   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt with IP's: []
	I0930 11:11:47.718587   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt ...
	I0930 11:11:47.718617   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt: {Name:mkef0c2b538ff6ec90e4096f6b30d2cc62a0498b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718785   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key ...
	I0930 11:11:47.718795   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key: {Name:mk0bf4d552829907727733b9f23a1e78046426c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718864   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf
	I0930 11:11:47.718878   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.254]
	I0930 11:11:47.993565   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf ...
	I0930 11:11:47.993602   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf: {Name:mk8d827ffc338aba548bc3df464e9e04ae838b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993789   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf ...
	I0930 11:11:47.993807   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf: {Name:mka275015927a8ca9f533558d637ec2560f5b41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993887   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:11:47.993965   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:11:47.994041   26946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:11:47.994059   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt with IP's: []
	I0930 11:11:48.098988   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt ...
	I0930 11:11:48.099020   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt: {Name:mk7106fd4af523e8a328dae6580fd1ecc34c18b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099178   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key ...
	I0930 11:11:48.099189   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key: {Name:mka3dbe7128ec5d469ec7906155af8e6e7cc2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099265   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:11:48.099283   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:11:48.099294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:11:48.099304   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:11:48.099314   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:11:48.099324   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:11:48.099333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:11:48.099342   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:11:48.099385   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:11:48.099425   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:11:48.099434   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:11:48.099457   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:11:48.099481   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:11:48.099502   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:11:48.099537   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:11:48.099561   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.099574   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.099592   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.100091   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:11:48.126879   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:11:48.153722   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:11:48.179797   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:11:48.205074   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:11:48.230272   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:11:48.255030   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:11:48.279850   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:11:48.306723   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:11:48.332995   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:11:48.363646   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:11:48.392223   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:11:48.410336   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:11:48.416506   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:11:48.428642   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433601   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433673   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.439817   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:11:48.451918   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:11:48.464282   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469211   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.475319   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:11:48.487558   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:11:48.500151   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505278   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505355   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.511924   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
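
The openssl/ln sequence above installs each CA certificate under its subject-hash name (for example b5213941.0 for minikubeCA.pem) in /etc/ssl/certs. A small Go sketch that shells out to openssl for the hash and creates the symlink, assuming the same paths as the log and root privileges, is shown below.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same idea as `openssl x509 -hash -noout -in ...` followed by `ln -fs`.
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
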
	I0930 11:11:48.525201   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:11:48.529960   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:11:48.530014   26946 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:48.530081   26946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:11:48.530129   26946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:11:48.568913   26946 cri.go:89] found id: ""
	I0930 11:11:48.568975   26946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:11:48.580292   26946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 11:11:48.593494   26946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 11:11:48.606006   26946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 11:11:48.606037   26946 kubeadm.go:157] found existing configuration files:
	
	I0930 11:11:48.606079   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 11:11:48.615784   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 11:11:48.615855   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 11:11:48.626018   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 11:11:48.635953   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 11:11:48.636032   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 11:11:48.646292   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.657605   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 11:11:48.657679   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.669154   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 11:11:48.680279   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 11:11:48.680348   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 11:11:48.691798   26946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 11:11:48.797903   26946 kubeadm.go:310] W0930 11:11:48.782166     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.798931   26946 kubeadm.go:310] W0930 11:11:48.783291     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.907657   26946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 11:12:00.116285   26946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 11:12:00.116363   26946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 11:12:00.116459   26946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 11:12:00.116597   26946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 11:12:00.116728   26946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 11:12:00.116817   26946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 11:12:00.118253   26946 out.go:235]   - Generating certificates and keys ...
	I0930 11:12:00.118344   26946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 11:12:00.118441   26946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 11:12:00.118536   26946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 11:12:00.118621   26946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 11:12:00.118710   26946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 11:12:00.118780   26946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 11:12:00.118849   26946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 11:12:00.118971   26946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119022   26946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 11:12:00.119113   26946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119209   26946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 11:12:00.119261   26946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 11:12:00.119300   26946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 11:12:00.119361   26946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 11:12:00.119418   26946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 11:12:00.119463   26946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 11:12:00.119517   26946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 11:12:00.119604   26946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 11:12:00.119657   26946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 11:12:00.119721   26946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 11:12:00.119813   26946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 11:12:00.121972   26946 out.go:235]   - Booting up control plane ...
	I0930 11:12:00.122077   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 11:12:00.122168   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 11:12:00.122257   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 11:12:00.122354   26946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 11:12:00.122445   26946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 11:12:00.122493   26946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 11:12:00.122632   26946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 11:12:00.122746   26946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 11:12:00.122807   26946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002277963s
	I0930 11:12:00.122866   26946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 11:12:00.122914   26946 kubeadm.go:310] [api-check] The API server is healthy after 5.817139259s
	I0930 11:12:00.123017   26946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 11:12:00.123126   26946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 11:12:00.123189   26946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 11:12:00.123373   26946 kubeadm.go:310] [mark-control-plane] Marking the node ha-033260 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 11:12:00.123455   26946 kubeadm.go:310] [bootstrap-token] Using token: mglnbr.4ysxjyfx6ulvufry
	I0930 11:12:00.124695   26946 out.go:235]   - Configuring RBAC rules ...
	I0930 11:12:00.124816   26946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 11:12:00.124888   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 11:12:00.125008   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 11:12:00.125123   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 11:12:00.125226   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 11:12:00.125300   26946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 11:12:00.125399   26946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 11:12:00.125438   26946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 11:12:00.125482   26946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 11:12:00.125488   26946 kubeadm.go:310] 
	I0930 11:12:00.125543   26946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 11:12:00.125548   26946 kubeadm.go:310] 
	I0930 11:12:00.125627   26946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 11:12:00.125640   26946 kubeadm.go:310] 
	I0930 11:12:00.125667   26946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 11:12:00.125722   26946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 11:12:00.125765   26946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 11:12:00.125771   26946 kubeadm.go:310] 
	I0930 11:12:00.125822   26946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 11:12:00.125832   26946 kubeadm.go:310] 
	I0930 11:12:00.125875   26946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 11:12:00.125882   26946 kubeadm.go:310] 
	I0930 11:12:00.125945   26946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 11:12:00.126010   26946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 11:12:00.126068   26946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 11:12:00.126073   26946 kubeadm.go:310] 
	I0930 11:12:00.126141   26946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 11:12:00.126212   26946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 11:12:00.126219   26946 kubeadm.go:310] 
	I0930 11:12:00.126299   26946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126384   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 \
	I0930 11:12:00.126404   26946 kubeadm.go:310] 	--control-plane 
	I0930 11:12:00.126410   26946 kubeadm.go:310] 
	I0930 11:12:00.126493   26946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 11:12:00.126501   26946 kubeadm.go:310] 
	I0930 11:12:00.126563   26946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126653   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 
	I0930 11:12:00.126666   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:12:00.126671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:12:00.128070   26946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 11:12:00.129234   26946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 11:12:00.134944   26946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 11:12:00.134960   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 11:12:00.155333   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 11:12:00.530346   26946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 11:12:00.530478   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260 minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=true
	I0930 11:12:00.530486   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:00.762071   26946 ops.go:34] apiserver oom_adj: -16
	I0930 11:12:00.762161   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.262836   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.762341   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.262939   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.762594   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.263292   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.762877   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.861166   26946 kubeadm.go:1113] duration metric: took 3.330735229s to wait for elevateKubeSystemPrivileges
	I0930 11:12:03.861207   26946 kubeadm.go:394] duration metric: took 15.331194175s to StartCluster
	I0930 11:12:03.861229   26946 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.861306   26946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.861899   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.862096   26946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:03.862109   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 11:12:03.862128   26946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:12:03.862180   26946 addons.go:69] Setting storage-provisioner=true in profile "ha-033260"
	I0930 11:12:03.862192   26946 addons.go:234] Setting addon storage-provisioner=true in "ha-033260"
	I0930 11:12:03.862117   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:12:03.862217   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.862220   26946 addons.go:69] Setting default-storageclass=true in profile "ha-033260"
	I0930 11:12:03.862242   26946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-033260"
	I0930 11:12:03.862318   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:03.862546   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862579   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.862640   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862674   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.878311   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0930 11:12:03.878524   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0930 11:12:03.878793   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.878956   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.879296   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879311   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879437   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879458   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879666   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.879878   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.880063   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.880274   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.880317   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.882311   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.882615   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:12:03.883117   26946 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:12:03.883340   26946 addons.go:234] Setting addon default-storageclass=true in "ha-033260"
	I0930 11:12:03.883377   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.883734   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.883774   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.895612   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0930 11:12:03.896182   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.896686   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.896706   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.897041   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.897263   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.899125   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.899133   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I0930 11:12:03.899601   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.900021   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.900036   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.900378   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.901008   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.901054   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.901205   26946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:12:03.902407   26946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:03.902428   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:12:03.902445   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.905497   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906023   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.906045   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906199   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.906396   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.906554   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.906702   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.917103   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:12:03.917557   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.918124   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.918149   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.918507   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.918675   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.920302   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.920506   26946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:03.920522   26946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:12:03.920544   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.923151   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923529   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.923552   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.923867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.923995   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.924108   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.981471   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 11:12:04.090970   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:04.120632   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:04.535542   26946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0930 11:12:04.535597   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535614   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.535906   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.535923   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.535937   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535938   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.535945   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.536174   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.536192   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.536203   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.536265   26946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:12:04.536288   26946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:12:04.536378   26946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 11:12:04.536387   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.536394   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.536397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.616635   26946 round_trippers.go:574] Response Status: 200 OK in 80 milliseconds
	I0930 11:12:04.617143   26946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 11:12:04.617157   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.617164   26946 round_trippers.go:473]     Content-Type: application/json
	I0930 11:12:04.617168   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.617171   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.644304   26946 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0930 11:12:04.644577   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.644596   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.644880   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.644899   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.839773   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.839805   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840111   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840131   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.840140   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.840149   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840370   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840384   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.841979   26946 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 11:12:04.843256   26946 addons.go:510] duration metric: took 981.127437ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 11:12:04.843295   26946 start.go:246] waiting for cluster config update ...
	I0930 11:12:04.843309   26946 start.go:255] writing updated cluster config ...
	I0930 11:12:04.844944   26946 out.go:201] 
	I0930 11:12:04.846458   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:04.846524   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.848060   26946 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:12:04.849158   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:12:04.849179   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:12:04.849280   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:12:04.849291   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:12:04.849355   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.849507   26946 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:12:04.849551   26946 start.go:364] duration metric: took 26.46µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:12:04.849567   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:04.849642   26946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 11:12:04.851226   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:12:04.851326   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:04.851360   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:04.866966   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0930 11:12:04.867433   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:04.867975   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:04.867995   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:04.868336   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:04.868557   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:04.868710   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:04.868858   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:12:04.868889   26946 client.go:168] LocalClient.Create starting
	I0930 11:12:04.868923   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:12:04.868957   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.868973   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869023   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:12:04.869042   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.869052   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869078   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:12:04.869093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .PreCreateCheck
	I0930 11:12:04.869253   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:04.869711   26946 main.go:141] libmachine: Creating machine...
	I0930 11:12:04.869724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .Create
	I0930 11:12:04.869845   26946 main.go:141] libmachine: (ha-033260-m02) Creating KVM machine...
	I0930 11:12:04.871091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing default KVM network
	I0930 11:12:04.871157   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing private KVM network mk-ha-033260
	I0930 11:12:04.871294   26946 main.go:141] libmachine: (ha-033260-m02) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:04.871318   26946 main.go:141] libmachine: (ha-033260-m02) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:12:04.871364   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:04.871284   27323 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:04.871439   26946 main.go:141] libmachine: (ha-033260-m02) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:12:05.099309   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.099139   27323 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa...
	I0930 11:12:05.396113   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.395976   27323 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk...
	I0930 11:12:05.396137   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing magic tar header
	I0930 11:12:05.396150   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing SSH key tar header
	I0930 11:12:05.396161   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.396084   27323 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:05.396175   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02
	I0930 11:12:05.396200   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 (perms=drwx------)
	I0930 11:12:05.396209   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:12:05.396245   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:12:05.396258   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:12:05.396269   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:12:05.396285   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:12:05.396302   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:12:05.396315   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:12:05.396331   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:05.396348   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:12:05.396365   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:12:05.396376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:12:05.396390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home
	I0930 11:12:05.396400   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Skipping /home - not owner
	I0930 11:12:05.397208   26946 main.go:141] libmachine: (ha-033260-m02) define libvirt domain using xml: 
	I0930 11:12:05.397237   26946 main.go:141] libmachine: (ha-033260-m02) <domain type='kvm'>
	I0930 11:12:05.397248   26946 main.go:141] libmachine: (ha-033260-m02)   <name>ha-033260-m02</name>
	I0930 11:12:05.397259   26946 main.go:141] libmachine: (ha-033260-m02)   <memory unit='MiB'>2200</memory>
	I0930 11:12:05.397267   26946 main.go:141] libmachine: (ha-033260-m02)   <vcpu>2</vcpu>
	I0930 11:12:05.397273   26946 main.go:141] libmachine: (ha-033260-m02)   <features>
	I0930 11:12:05.397282   26946 main.go:141] libmachine: (ha-033260-m02)     <acpi/>
	I0930 11:12:05.397289   26946 main.go:141] libmachine: (ha-033260-m02)     <apic/>
	I0930 11:12:05.397297   26946 main.go:141] libmachine: (ha-033260-m02)     <pae/>
	I0930 11:12:05.397306   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397314   26946 main.go:141] libmachine: (ha-033260-m02)   </features>
	I0930 11:12:05.397321   26946 main.go:141] libmachine: (ha-033260-m02)   <cpu mode='host-passthrough'>
	I0930 11:12:05.397329   26946 main.go:141] libmachine: (ha-033260-m02)   
	I0930 11:12:05.397335   26946 main.go:141] libmachine: (ha-033260-m02)   </cpu>
	I0930 11:12:05.397359   26946 main.go:141] libmachine: (ha-033260-m02)   <os>
	I0930 11:12:05.397379   26946 main.go:141] libmachine: (ha-033260-m02)     <type>hvm</type>
	I0930 11:12:05.397384   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='cdrom'/>
	I0930 11:12:05.397391   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='hd'/>
	I0930 11:12:05.397407   26946 main.go:141] libmachine: (ha-033260-m02)     <bootmenu enable='no'/>
	I0930 11:12:05.397419   26946 main.go:141] libmachine: (ha-033260-m02)   </os>
	I0930 11:12:05.397427   26946 main.go:141] libmachine: (ha-033260-m02)   <devices>
	I0930 11:12:05.397438   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='cdrom'>
	I0930 11:12:05.397450   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/boot2docker.iso'/>
	I0930 11:12:05.397461   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hdc' bus='scsi'/>
	I0930 11:12:05.397468   26946 main.go:141] libmachine: (ha-033260-m02)       <readonly/>
	I0930 11:12:05.397480   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397492   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='disk'>
	I0930 11:12:05.397501   26946 main.go:141] libmachine: (ha-033260-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:12:05.397518   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk'/>
	I0930 11:12:05.397528   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hda' bus='virtio'/>
	I0930 11:12:05.397538   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397548   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397565   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='mk-ha-033260'/>
	I0930 11:12:05.397579   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397590   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397605   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397627   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='default'/>
	I0930 11:12:05.397641   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397651   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397663   26946 main.go:141] libmachine: (ha-033260-m02)     <serial type='pty'>
	I0930 11:12:05.397672   26946 main.go:141] libmachine: (ha-033260-m02)       <target port='0'/>
	I0930 11:12:05.397683   26946 main.go:141] libmachine: (ha-033260-m02)     </serial>
	I0930 11:12:05.397693   26946 main.go:141] libmachine: (ha-033260-m02)     <console type='pty'>
	I0930 11:12:05.397702   26946 main.go:141] libmachine: (ha-033260-m02)       <target type='serial' port='0'/>
	I0930 11:12:05.397716   26946 main.go:141] libmachine: (ha-033260-m02)     </console>
	I0930 11:12:05.397728   26946 main.go:141] libmachine: (ha-033260-m02)     <rng model='virtio'>
	I0930 11:12:05.397739   26946 main.go:141] libmachine: (ha-033260-m02)       <backend model='random'>/dev/random</backend>
	I0930 11:12:05.397750   26946 main.go:141] libmachine: (ha-033260-m02)     </rng>
	I0930 11:12:05.397758   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397766   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397771   26946 main.go:141] libmachine: (ha-033260-m02)   </devices>
	I0930 11:12:05.397781   26946 main.go:141] libmachine: (ha-033260-m02) </domain>
	I0930 11:12:05.397794   26946 main.go:141] libmachine: (ha-033260-m02) 
	I0930 11:12:05.404924   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:91:42:82 in network default
	I0930 11:12:05.405500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:05.405515   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:12:05.406422   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:12:05.406717   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:12:05.407099   26946 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:12:05.407766   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:12:06.665629   26946 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:12:06.666463   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.666923   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.666983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.666914   27323 retry.go:31] will retry after 236.292128ms: waiting for machine to come up
	I0930 11:12:06.904458   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.904973   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.905008   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.904946   27323 retry.go:31] will retry after 373.72215ms: waiting for machine to come up
	I0930 11:12:07.280653   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.281148   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.281167   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.281127   27323 retry.go:31] will retry after 417.615707ms: waiting for machine to come up
	I0930 11:12:07.700723   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.701173   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.701199   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.701130   27323 retry.go:31] will retry after 495.480397ms: waiting for machine to come up
	I0930 11:12:08.198698   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.199207   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.199236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.199183   27323 retry.go:31] will retry after 541.395524ms: waiting for machine to come up
	I0930 11:12:08.742190   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.742786   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.742812   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.742737   27323 retry.go:31] will retry after 711.22134ms: waiting for machine to come up
	I0930 11:12:09.455685   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:09.456147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:09.456172   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:09.456119   27323 retry.go:31] will retry after 1.042420332s: waiting for machine to come up
	I0930 11:12:10.499804   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:10.500316   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:10.500353   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:10.500299   27323 retry.go:31] will retry after 1.048379902s: waiting for machine to come up
	I0930 11:12:11.550177   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:11.550587   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:11.550616   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:11.550525   27323 retry.go:31] will retry after 1.84570983s: waiting for machine to come up
	I0930 11:12:13.397532   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:13.398027   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:13.398052   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:13.397980   27323 retry.go:31] will retry after 1.566549945s: waiting for machine to come up
	I0930 11:12:14.966467   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:14.966938   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:14.966983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:14.966914   27323 retry.go:31] will retry after 1.814424901s: waiting for machine to come up
	I0930 11:12:16.783827   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:16.784216   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:16.784247   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:16.784177   27323 retry.go:31] will retry after 3.594354669s: waiting for machine to come up
	I0930 11:12:20.380537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:20.380935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:20.380960   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:20.380904   27323 retry.go:31] will retry after 3.199139157s: waiting for machine to come up
	I0930 11:12:23.582795   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:23.583206   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:23.583227   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:23.583181   27323 retry.go:31] will retry after 5.054668279s: waiting for machine to come up
	I0930 11:12:28.639867   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.640504   26946 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:12:28.640526   26946 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:12:28.640539   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.641001   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260
	I0930 11:12:28.722236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:12:28.722267   26946 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:12:28.722280   26946 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:12:28.724853   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725241   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.725265   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725515   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:12:28.725540   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:12:28.725576   26946 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:12:28.725598   26946 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:12:28.725610   26946 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:12:28.854399   26946 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:12:28.854625   26946 main.go:141] libmachine: (ha-033260-m02) KVM machine creation complete!
	I0930 11:12:28.855272   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:28.855866   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856047   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856170   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:12:28.856182   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:12:28.857578   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:12:28.857593   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:12:28.857600   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:12:28.857606   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.859889   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860246   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.860279   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860438   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.860622   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860773   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860913   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.861114   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.861325   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.861337   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:12:28.973157   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:28.973184   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:12:28.973195   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.976106   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.976531   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976798   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.977021   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977185   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977339   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.977493   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.977714   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.977727   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:12:29.086855   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:12:29.086927   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:12:29.086937   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:12:29.086951   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087245   26946 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:12:29.087269   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087463   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.090156   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090525   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.090551   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090676   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.090846   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.090986   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.091115   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.091289   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.091467   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.091479   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:12:29.220174   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:12:29.220204   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.223091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.223567   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.223905   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224048   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224217   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.224385   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.224590   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.224614   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:12:29.343733   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
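The shell snippet above is idempotent: it only rewrites the existing 127.0.1.1 entry, or appends one, when /etc/hosts does not already map the node's hostname. As a rough manual cross-check (not something the test itself runs), the result could be inspected over the same SSH path, assuming the profile name ha-033260, the node name m02, and that this minikube build supports the --node flag:

  # show the loopback hostname mapping the provisioner just ensured
  minikube ssh -p ha-033260 -n m02 -- "hostname && grep 127.0.1.1 /etc/hosts"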
	I0930 11:12:29.343767   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:12:29.343787   26946 buildroot.go:174] setting up certificates
	I0930 11:12:29.343798   26946 provision.go:84] configureAuth start
	I0930 11:12:29.343811   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.344093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:29.346631   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.346930   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.346956   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.347096   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.349248   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349664   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.349689   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349858   26946 provision.go:143] copyHostCerts
	I0930 11:12:29.349889   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.349936   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:12:29.349948   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.350055   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:12:29.350156   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350176   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:12:29.350181   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350207   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:12:29.350254   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350271   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:12:29.350277   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350298   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:12:29.350347   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
	I0930 11:12:29.533329   26946 provision.go:177] copyRemoteCerts
	I0930 11:12:29.533387   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:12:29.533409   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.535946   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536287   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.536327   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536541   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.536745   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.536906   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.537054   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:29.625264   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:12:29.625353   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:12:29.651589   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:12:29.651644   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:12:29.677526   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:12:29.677634   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:12:29.708210   26946 provision.go:87] duration metric: took 364.395657ms to configureAuth
	I0930 11:12:29.708246   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:12:29.708446   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:29.708540   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.711111   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711545   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.711578   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711743   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.711914   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712073   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712191   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.712381   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.712587   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.712611   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:12:29.956548   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
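Here buildroot.go writes the extra CRI-O flag into /etc/sysconfig/crio.minikube and restarts the service, so CRI-O treats the in-cluster service range 10.96.0.0/12 as an insecure registry. A hedged way to confirm the file and the restarted daemon on the node (same assumed profile/node flags as above):

  minikube ssh -p ha-033260 -n m02 -- "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"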
	I0930 11:12:29.956576   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:12:29.956585   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetURL
	I0930 11:12:29.957861   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using libvirt version 6000000
	I0930 11:12:29.959943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960349   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.960376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960589   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:12:29.960605   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:12:29.960611   26946 client.go:171] duration metric: took 25.091713434s to LocalClient.Create
	I0930 11:12:29.960635   26946 start.go:167] duration metric: took 25.091779085s to libmachine.API.Create "ha-033260"
	I0930 11:12:29.960649   26946 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:12:29.960663   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:12:29.960682   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:29.960894   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:12:29.960921   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.962943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963366   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.963390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963547   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.963747   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.963887   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.963995   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.049684   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:12:30.054345   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:12:30.054373   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:12:30.054430   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:12:30.054507   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:12:30.054516   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:12:30.054592   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:12:30.064685   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:30.090069   26946 start.go:296] duration metric: took 129.405576ms for postStartSetup
	I0930 11:12:30.090127   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:30.090769   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.093475   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.093805   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.093836   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.094011   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:30.094269   26946 start.go:128] duration metric: took 25.244614564s to createHost
	I0930 11:12:30.094293   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.096188   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096490   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.096524   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096656   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.096825   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.096963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.097093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.097253   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:30.097426   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:30.097439   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:12:30.206856   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694750.184612585
	
	I0930 11:12:30.206885   26946 fix.go:216] guest clock: 1727694750.184612585
	I0930 11:12:30.206895   26946 fix.go:229] Guest: 2024-09-30 11:12:30.184612585 +0000 UTC Remote: 2024-09-30 11:12:30.094281951 +0000 UTC m=+73.159334041 (delta=90.330634ms)
	I0930 11:12:30.206915   26946 fix.go:200] guest clock delta is within tolerance: 90.330634ms
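fix.go runs date +%s.%N inside the guest and compares it against the host clock; the ~90 ms delta here is accepted as within tolerance, so no clock adjustment is needed. Roughly the same comparison sketched by hand (illustrative only; the exact tolerance minikube applies is not shown in this log, and bc is assumed to be available on the host):

  host_ts=$(date +%s.%N)
  guest_ts=$(minikube ssh -p ha-033260 -n m02 -- date +%s.%N)
  echo "guest - host = $(echo "$guest_ts - $host_ts" | bc) seconds"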
	I0930 11:12:30.206922   26946 start.go:83] releasing machines lock for "ha-033260-m02", held for 25.357361614s
	I0930 11:12:30.206944   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.207256   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.209590   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.209935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.209964   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.212335   26946 out.go:177] * Found network options:
	I0930 11:12:30.213673   26946 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:12:30.215021   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.215056   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215673   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215843   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215938   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:12:30.215976   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:12:30.215983   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.216054   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:12:30.216075   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.218771   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.218983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219125   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219360   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219434   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219459   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219516   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219662   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219670   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.219831   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219846   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.219963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.220088   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.454192   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:12:30.462288   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:12:30.462348   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:12:30.479853   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:12:30.479878   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:12:30.479941   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:12:30.496617   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:12:30.512078   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:12:30.512142   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:12:30.526557   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:12:30.541136   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:12:30.655590   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:12:30.814049   26946 docker.go:233] disabling docker service ...
	I0930 11:12:30.814123   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:12:30.829972   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:12:30.844068   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:12:30.969831   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:12:31.096443   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:12:31.111612   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:12:31.131553   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:12:31.131621   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.143596   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:12:31.143658   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.156112   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.167422   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.179559   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:12:31.192037   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.203507   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.222188   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
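Taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf with at least the following keys; this is a reconstruction from the commands in the log, not a dump of the actual file:

  # expected effect of the edits above on /etc/crio/crio.conf.d/02-crio.conf
  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]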
	I0930 11:12:31.234115   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:12:31.245344   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:12:31.245401   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:12:31.259589   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:12:31.269907   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:31.388443   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:12:31.482864   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:12:31.482933   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:12:31.487957   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:12:31.488026   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:12:31.492173   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:12:31.530740   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:12:31.530821   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.560435   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.592377   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:12:31.593888   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:12:31.595254   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:31.598165   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598504   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:31.598535   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598710   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:12:31.603081   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:31.616231   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:12:31.616424   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:31.616676   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.616714   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.631793   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0930 11:12:31.632254   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.632734   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.632757   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.633092   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.633272   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:31.634860   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:31.635130   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.635170   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.649687   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0930 11:12:31.650053   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.650497   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.650520   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.650803   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.650951   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:31.651118   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:12:31.651130   26946 certs.go:194] generating shared ca certs ...
	I0930 11:12:31.651148   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.651260   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:12:31.651304   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:12:31.651313   26946 certs.go:256] generating profile certs ...
	I0930 11:12:31.651410   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:12:31.651435   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87
	I0930 11:12:31.651449   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.254]
	I0930 11:12:31.912914   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 ...
	I0930 11:12:31.912947   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87: {Name:mk5789d867ee86689334498533835b6baa525e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913110   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 ...
	I0930 11:12:31.913123   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87: {Name:mkcd56431095ebd059864bd581ed7c141670cf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913195   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:12:31.913335   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:12:31.913463   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
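The regenerated apiserver certificate covers the SANs listed at 11:12:31.651449: the service IPs 10.96.0.1 and 10.0.0.1, the loopback address, both node IPs (192.168.39.249 and 192.168.39.3), and the kube-vip VIP 192.168.39.254, which is what lets either control plane terminate TLS for the shared endpoint. A hedged way to confirm the SAN list from the host, using the profile path shown above:

  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt \
    | grep -A1 "Subject Alternative Name"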
	I0930 11:12:31.913478   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:12:31.913490   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:12:31.913500   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:12:31.913510   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:12:31.913520   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:12:31.913529   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:12:31.913539   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:12:31.913551   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:12:31.913591   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:12:31.913648   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:12:31.913661   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:12:31.913690   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:12:31.913712   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:12:31.913735   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:12:31.913780   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:31.913806   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:31.913824   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:12:31.913836   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:12:31.913865   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:31.917099   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917453   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:31.917482   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917675   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:31.917892   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:31.918041   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:31.918169   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:31.994019   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:12:31.999621   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:12:32.012410   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:12:32.017661   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:12:32.028991   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:12:32.034566   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:12:32.047607   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:12:32.052664   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:12:32.069473   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:12:32.074705   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:12:32.086100   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:12:32.090557   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:12:32.103048   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:12:32.132371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:12:32.159806   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:12:32.185933   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:12:32.210826   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 11:12:32.236862   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:12:32.262441   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:12:32.289773   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:12:32.318287   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:12:32.347371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:12:32.372327   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:12:32.397781   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:12:32.415260   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:12:32.433137   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:12:32.450661   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:12:32.467444   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:12:32.484994   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:12:32.503412   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:12:32.522919   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:12:32.529057   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:12:32.541643   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546691   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546753   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.553211   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:12:32.565054   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:12:32.576855   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581764   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581818   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.588983   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:12:32.602082   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:12:32.613340   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617722   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617775   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.623445   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
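The openssl x509 -hash calls above compute OpenSSL subject-hash names (b5213941, 51391683 and 3ec20f2e in this run), and the /etc/ssl/certs/<hash>.0 symlinks are what let OpenSSL-based clients find each CA by hash lookup. A small illustration of the convention as it could be checked on the node (hypothetical session, not part of the test):

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  ls -l "/etc/ssl/certs/${hash}.0"   # expected to point back at minikubeCA.pem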
	I0930 11:12:32.635275   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:12:32.639755   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:12:32.639812   26946 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:12:32.639905   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
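The kubelet unit fragment above (Wants=crio.service plus the ExecStart override with --node-ip=192.168.39.3 and --hostname-override=ha-033260-m02) is what later gets copied to the node as the 311-byte 10-kubeadm.conf drop-in and the 352-byte kubelet.service. To see what systemd actually merges and runs, one could inspect the unit on the node; a sketch, with the same assumed profile/node flags as before:

  minikube ssh -p ha-033260 -n m02 -- "systemctl cat kubelet | grep -E 'ExecStart|node-ip|hostname-override'"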
	I0930 11:12:32.639928   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:12:32.639958   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:12:32.657152   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:12:32.657231   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
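This manifest is written as a static pod (the 1441-byte copy to /etc/kubernetes/manifests/kube-vip.yaml below), so the kubelet on every control-plane node runs kube-vip, and leader election via the plndr-cp-lock lease decides which node currently answers on the VIP 192.168.39.254 (announced on eth0 per vip_interface). A hedged check of whether a given node holds the address right now:

  minikube ssh -p ha-033260 -n m02 -- "ip addr show eth0 | grep 192.168.39.254 || echo 'VIP not held by this node'"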
	I0930 11:12:32.657301   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.669072   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:12:32.669126   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.681078   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:12:32.681102   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681147   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 11:12:32.681159   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681202   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 11:12:32.685896   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:12:32.685930   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:12:33.355089   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.355169   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.360551   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:12:33.360593   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:12:33.497331   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:12:33.536292   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.536381   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.556993   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:12:33.557034   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
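Because the node has no cached Kubernetes binaries, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io with a ?checksum=file:...sha256 query, which makes minikube's downloader verify each file against the published SHA-256 before caching and copying it over. Roughly the same verification done by hand with plain curl and sha256sum (illustrative only, not how the test fetches them):

  V=v1.31.1
  curl -fLo kubelet "https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet"
  echo "$(curl -fsSL https://dl.k8s.io/release/${V}/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check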
	I0930 11:12:33.963212   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:12:33.973956   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:12:33.992407   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:12:34.010174   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:12:34.027647   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:12:34.031715   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:34.045021   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:34.164493   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:34.181854   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:34.182385   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:34.182436   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:34.197448   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0930 11:12:34.197925   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:34.198415   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:34.198439   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:34.198777   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:34.199019   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:34.199179   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:34.199281   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:12:34.199296   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:34.202318   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202754   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:34.202783   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202947   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:34.203150   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:34.203332   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:34.203477   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:34.356774   26946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:34.356813   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443"
	I0930 11:12:56.361665   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443": (22.004830324s)
	I0930 11:12:56.361703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:12:57.091049   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m02 minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:12:57.252660   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:12:57.383009   26946 start.go:319] duration metric: took 23.183825523s to joinCluster
	I0930 11:12:57.383083   26946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:57.383372   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:57.384696   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:12:57.385781   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:57.652948   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:57.700673   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:57.700909   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:12:57.700967   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:12:57.701166   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:12:57.701263   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:57.701272   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:57.701283   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:57.701288   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:57.710787   26946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:12:58.201703   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.201723   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.201733   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.201738   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.218761   26946 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:12:58.701415   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.701436   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.701444   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.701447   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.707425   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:12:59.202375   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.202398   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.202410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.202416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.206657   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.701590   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.701611   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.701635   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.701642   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.706264   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.707024   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:00.201877   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.201901   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.201917   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.201924   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.205419   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:00.701357   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.701378   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.701386   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.701391   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.706252   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:01.202282   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.202307   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.202319   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.202325   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.206013   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:01.701738   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.701760   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.701768   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.701773   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.705302   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.202004   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.202030   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.202043   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.202051   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.205535   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.206136   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:02.701406   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.701427   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.701436   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.701440   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.704929   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.202160   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.202189   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.202198   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.202204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.205838   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.701797   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.701821   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.701832   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.701841   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.706107   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:04.201592   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.201623   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.201634   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.201641   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.204858   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:04.701789   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.701812   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.701825   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.701831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.710541   26946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:13:04.711317   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:05.202211   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.202237   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.202248   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.202255   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.206000   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:05.702240   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.702263   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.702272   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.702276   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.713473   26946 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0930 11:13:06.201370   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.201398   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.201412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.201421   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.205062   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:06.702136   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.702157   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.702170   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.702178   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.707226   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:07.201911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.201933   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.201941   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.201947   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.205398   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:07.206056   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:07.702203   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.702228   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.702236   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.702240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.705652   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.201364   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.201385   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.201393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.201397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.204682   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.701564   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.701585   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.701593   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.701597   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.704941   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.201826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.201874   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.201887   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.201892   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.205730   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.206265   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:09.701548   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.701576   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.701584   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.701588   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.704970   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.202351   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.202382   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.202393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.202402   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.205886   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.701694   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.701717   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.701725   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.701729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.705252   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.202235   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.202256   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.202264   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.205904   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.206456   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:11.701817   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.701840   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.701848   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.701852   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.705418   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:12.202233   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.202257   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.202273   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.206552   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:12.701910   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.701932   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.701940   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.701944   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.705423   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.201690   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.201715   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.201727   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.201733   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.205360   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.701378   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.701402   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.701410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.701416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.704921   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.705712   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:14.202280   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.202303   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.202313   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.202317   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.206153   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.701500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.701536   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.701545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.701549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.705110   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.705891   26946 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:13:14.705919   26946 node_ready.go:38] duration metric: took 17.004728232s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:13:14.705930   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:14.706003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:14.706012   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.706019   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.706027   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.710637   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.717034   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.717112   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:13:14.717120   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.717127   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.717132   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.720167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.720847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.720863   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.720870   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.720874   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.723869   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.724515   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.724535   26946 pod_ready.go:82] duration metric: took 7.4758ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724545   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724613   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:13:14.724621   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.724628   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.724634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.727903   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.728724   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.728741   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.728751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.728757   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.731653   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.732553   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.732574   26946 pod_ready.go:82] duration metric: took 8.020759ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732586   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:13:14.732664   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.732674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.732682   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.735972   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.736968   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.736990   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.737001   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.737006   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.742593   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:14.743126   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.743157   26946 pod_ready.go:82] duration metric: took 10.560613ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743170   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743261   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:13:14.743274   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.743284   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.743295   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.746988   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.747647   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.747666   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.747678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.747685   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.752616   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.753409   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.753424   26946 pod_ready.go:82] duration metric: took 10.242469ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.753437   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.901974   26946 request.go:632] Waited for 148.458979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902036   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902043   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.902055   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.902060   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.905987   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.101905   26946 request.go:632] Waited for 195.35281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.101994   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.102002   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.102014   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.102020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.106060   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:15.106613   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.106631   26946 pod_ready.go:82] duration metric: took 353.188275ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.106640   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.301775   26946 request.go:632] Waited for 195.071866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301852   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301859   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.301869   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.301877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.305432   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.502470   26946 request.go:632] Waited for 196.425957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502545   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502550   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.502559   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.502564   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.506368   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.506795   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.506815   26946 pod_ready.go:82] duration metric: took 400.168693ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.506824   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.702050   26946 request.go:632] Waited for 195.162388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702133   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702141   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.702152   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.702163   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.705891   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.901957   26946 request.go:632] Waited for 195.415244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902032   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.902045   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.902050   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.905760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.906550   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.906568   26946 pod_ready.go:82] duration metric: took 399.738814ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.906577   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.101960   26946 request.go:632] Waited for 195.295618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102020   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.102027   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.102034   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.105657   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.301949   26946 request.go:632] Waited for 195.400353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302010   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302015   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.302022   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.302028   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.306149   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:16.306664   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.306684   26946 pod_ready.go:82] duration metric: took 400.100909ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.306693   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.501852   26946 request.go:632] Waited for 195.093896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501929   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501936   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.501944   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.501948   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.505624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.702111   26946 request.go:632] Waited for 195.755005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702172   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702201   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.702232   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.702242   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.706191   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.706772   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.706793   26946 pod_ready.go:82] duration metric: took 400.093034ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.706806   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.901822   26946 request.go:632] Waited for 194.939903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901874   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901878   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.901886   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.901890   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.905939   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.102468   26946 request.go:632] Waited for 195.869654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102551   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102559   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.102570   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.102576   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.105889   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.106573   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.106594   26946 pod_ready.go:82] duration metric: took 399.778126ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.106605   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.301593   26946 request.go:632] Waited for 194.913576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301658   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.301671   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.301678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.305178   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.502249   26946 request.go:632] Waited for 196.387698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502326   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502350   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.502358   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.502362   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.505833   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.506907   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.506935   26946 pod_ready.go:82] duration metric: took 400.319251ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.506948   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.701919   26946 request.go:632] Waited for 194.9063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.701999   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.702006   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.702017   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.702028   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.705520   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.902402   26946 request.go:632] Waited for 196.207639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902477   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902485   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.902500   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.902526   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.906656   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.907109   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.907128   26946 pod_ready.go:82] duration metric: took 400.172408ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.907142   26946 pod_ready.go:39] duration metric: took 3.201195785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:17.907159   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:13:17.907218   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:13:17.923202   26946 api_server.go:72] duration metric: took 20.540084285s to wait for apiserver process to appear ...
	I0930 11:13:17.923232   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:13:17.923251   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:13:17.929517   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:13:17.929596   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:13:17.929602   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.929631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.929636   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.930581   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:13:17.930807   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:13:17.930834   26946 api_server.go:131] duration metric: took 7.593991ms to wait for apiserver health ...
	I0930 11:13:17.930843   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:13:18.102359   26946 request.go:632] Waited for 171.419304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102425   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102433   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.102442   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.102449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.107679   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:18.114591   26946 system_pods.go:59] 17 kube-system pods found
	I0930 11:13:18.114717   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.114749   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.114780   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.114803   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.114826   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.114841   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.114876   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.114899   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.114915   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.114935   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.114950   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.114975   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.114997   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.115011   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.115025   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.115059   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.115132   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.115146   26946 system_pods.go:74] duration metric: took 184.295086ms to wait for pod list to return data ...
	I0930 11:13:18.115155   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:13:18.301606   26946 request.go:632] Waited for 186.324564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301691   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301697   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.301704   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.301708   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.305792   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.306031   26946 default_sa.go:45] found service account: "default"
	I0930 11:13:18.306053   26946 default_sa.go:55] duration metric: took 190.887438ms for default service account to be created ...
	I0930 11:13:18.306064   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:13:18.502520   26946 request.go:632] Waited for 196.381212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502574   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502580   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.502589   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.502594   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.507606   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.513786   26946 system_pods.go:86] 17 kube-system pods found
	I0930 11:13:18.513814   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.513820   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.513824   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.513828   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.513832   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.513835   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.513838   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.513842   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.513845   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.513849   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.513852   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.513855   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.513858   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.513864   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.513868   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.513871   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.513874   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.513883   26946 system_pods.go:126] duration metric: took 207.809961ms to wait for k8s-apps to be running ...
	I0930 11:13:18.513889   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:13:18.513933   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:18.530491   26946 system_svc.go:56] duration metric: took 16.594303ms WaitForService to wait for kubelet
	I0930 11:13:18.530520   26946 kubeadm.go:582] duration metric: took 21.147406438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:13:18.530536   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:13:18.701935   26946 request.go:632] Waited for 171.311845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.701998   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.702004   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.702013   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.702020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.706454   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.707258   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707286   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707302   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707309   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707315   26946 node_conditions.go:105] duration metric: took 176.773141ms to run NodePressure ...
	I0930 11:13:18.707329   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:13:18.707365   26946 start.go:255] writing updated cluster config ...
	I0930 11:13:18.709744   26946 out.go:201] 
	I0930 11:13:18.711365   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:18.711455   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.713157   26946 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:13:18.714611   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:13:18.714636   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:13:18.714744   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:13:18.714757   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:13:18.714852   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.715040   26946 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:13:18.715084   26946 start.go:364] duration metric: took 25.338µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:13:18.715101   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:18.715188   26946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 11:13:18.716794   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:13:18.716894   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:18.716928   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:18.732600   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I0930 11:13:18.733109   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:18.733561   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:18.733575   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:18.733910   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:18.734089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:18.734238   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:18.734421   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:13:18.734451   26946 client.go:168] LocalClient.Create starting
	I0930 11:13:18.734489   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:13:18.734529   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734544   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734600   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:13:18.734619   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734631   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734648   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:13:18.734656   26946 main.go:141] libmachine: (ha-033260-m03) Calling .PreCreateCheck
	I0930 11:13:18.734797   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:18.735196   26946 main.go:141] libmachine: Creating machine...
	I0930 11:13:18.735209   26946 main.go:141] libmachine: (ha-033260-m03) Calling .Create
	I0930 11:13:18.735336   26946 main.go:141] libmachine: (ha-033260-m03) Creating KVM machine...
	I0930 11:13:18.736643   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing default KVM network
	I0930 11:13:18.736820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing private KVM network mk-ha-033260
	I0930 11:13:18.736982   26946 main.go:141] libmachine: (ha-033260-m03) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:18.737011   26946 main.go:141] libmachine: (ha-033260-m03) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:13:18.737118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.736992   27716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:18.737204   26946 main.go:141] libmachine: (ha-033260-m03) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:13:18.965830   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.965684   27716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa...
	I0930 11:13:19.182387   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182221   27716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk...
	I0930 11:13:19.182427   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing magic tar header
	I0930 11:13:19.182442   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing SSH key tar header
	I0930 11:13:19.182454   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182378   27716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:19.182548   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03
	I0930 11:13:19.182570   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:13:19.182578   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 (perms=drwx------)
	I0930 11:13:19.182587   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:13:19.182596   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:13:19.182610   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:13:19.182620   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:13:19.182634   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:19.182647   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:13:19.182661   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:13:19.182678   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:13:19.182687   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:19.182699   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:13:19.182796   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home
	I0930 11:13:19.182820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Skipping /home - not owner
	I0930 11:13:19.183716   26946 main.go:141] libmachine: (ha-033260-m03) define libvirt domain using xml: 
	I0930 11:13:19.183740   26946 main.go:141] libmachine: (ha-033260-m03) <domain type='kvm'>
	I0930 11:13:19.183766   26946 main.go:141] libmachine: (ha-033260-m03)   <name>ha-033260-m03</name>
	I0930 11:13:19.183787   26946 main.go:141] libmachine: (ha-033260-m03)   <memory unit='MiB'>2200</memory>
	I0930 11:13:19.183800   26946 main.go:141] libmachine: (ha-033260-m03)   <vcpu>2</vcpu>
	I0930 11:13:19.183806   26946 main.go:141] libmachine: (ha-033260-m03)   <features>
	I0930 11:13:19.183817   26946 main.go:141] libmachine: (ha-033260-m03)     <acpi/>
	I0930 11:13:19.183827   26946 main.go:141] libmachine: (ha-033260-m03)     <apic/>
	I0930 11:13:19.183836   26946 main.go:141] libmachine: (ha-033260-m03)     <pae/>
	I0930 11:13:19.183845   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.183853   26946 main.go:141] libmachine: (ha-033260-m03)   </features>
	I0930 11:13:19.183861   26946 main.go:141] libmachine: (ha-033260-m03)   <cpu mode='host-passthrough'>
	I0930 11:13:19.183868   26946 main.go:141] libmachine: (ha-033260-m03)   
	I0930 11:13:19.183881   26946 main.go:141] libmachine: (ha-033260-m03)   </cpu>
	I0930 11:13:19.183892   26946 main.go:141] libmachine: (ha-033260-m03)   <os>
	I0930 11:13:19.183902   26946 main.go:141] libmachine: (ha-033260-m03)     <type>hvm</type>
	I0930 11:13:19.183911   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='cdrom'/>
	I0930 11:13:19.183924   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='hd'/>
	I0930 11:13:19.183936   26946 main.go:141] libmachine: (ha-033260-m03)     <bootmenu enable='no'/>
	I0930 11:13:19.183942   26946 main.go:141] libmachine: (ha-033260-m03)   </os>
	I0930 11:13:19.183951   26946 main.go:141] libmachine: (ha-033260-m03)   <devices>
	I0930 11:13:19.183961   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='cdrom'>
	I0930 11:13:19.183975   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/boot2docker.iso'/>
	I0930 11:13:19.183985   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hdc' bus='scsi'/>
	I0930 11:13:19.183993   26946 main.go:141] libmachine: (ha-033260-m03)       <readonly/>
	I0930 11:13:19.184007   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184019   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='disk'>
	I0930 11:13:19.184028   26946 main.go:141] libmachine: (ha-033260-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:13:19.184041   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk'/>
	I0930 11:13:19.184052   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hda' bus='virtio'/>
	I0930 11:13:19.184065   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184076   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184137   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='mk-ha-033260'/>
	I0930 11:13:19.184167   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184179   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184187   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184197   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='default'/>
	I0930 11:13:19.184205   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184215   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184223   26946 main.go:141] libmachine: (ha-033260-m03)     <serial type='pty'>
	I0930 11:13:19.184242   26946 main.go:141] libmachine: (ha-033260-m03)       <target port='0'/>
	I0930 11:13:19.184249   26946 main.go:141] libmachine: (ha-033260-m03)     </serial>
	I0930 11:13:19.184259   26946 main.go:141] libmachine: (ha-033260-m03)     <console type='pty'>
	I0930 11:13:19.184267   26946 main.go:141] libmachine: (ha-033260-m03)       <target type='serial' port='0'/>
	I0930 11:13:19.184277   26946 main.go:141] libmachine: (ha-033260-m03)     </console>
	I0930 11:13:19.184285   26946 main.go:141] libmachine: (ha-033260-m03)     <rng model='virtio'>
	I0930 11:13:19.184297   26946 main.go:141] libmachine: (ha-033260-m03)       <backend model='random'>/dev/random</backend>
	I0930 11:13:19.184305   26946 main.go:141] libmachine: (ha-033260-m03)     </rng>
	I0930 11:13:19.184313   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184326   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184337   26946 main.go:141] libmachine: (ha-033260-m03)   </devices>
	I0930 11:13:19.184344   26946 main.go:141] libmachine: (ha-033260-m03) </domain>
	I0930 11:13:19.184355   26946 main.go:141] libmachine: (ha-033260-m03) 
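Note: the domain XML dumped above is defined through libvirt by the kvm2 driver. As an illustrative sketch only (not part of the test run, and the file name m03.xml is hypothetical), the same definition could be loaded and inspected by hand with stock virsh tooling against the qemu:///system URI used in the config:

	# Define and start a domain from a saved copy of the XML above (illustrative only)
	virsh -c qemu:///system define m03.xml
	virsh -c qemu:///system start ha-033260-m03
	# Inspect the live definition and its attached disks
	virsh -c qemu:///system dumpxml ha-033260-m03
	virsh -c qemu:///system domblklist ha-033260-m03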
	I0930 11:13:19.191067   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:09:7f:ae in network default
	I0930 11:13:19.191719   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:13:19.191738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:19.192592   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:13:19.192924   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:13:19.193268   26946 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:13:19.193941   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:20.468738   26946 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:13:20.469515   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.469944   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.469970   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.469926   27716 retry.go:31] will retry after 232.398954ms: waiting for machine to come up
	I0930 11:13:20.704544   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.704996   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.705026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.704955   27716 retry.go:31] will retry after 380.728938ms: waiting for machine to come up
	I0930 11:13:21.087407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.087831   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.087853   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.087810   27716 retry.go:31] will retry after 405.871711ms: waiting for machine to come up
	I0930 11:13:21.495366   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.495857   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.495885   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.495810   27716 retry.go:31] will retry after 380.57456ms: waiting for machine to come up
	I0930 11:13:21.878262   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.878697   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.878718   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.878678   27716 retry.go:31] will retry after 486.639816ms: waiting for machine to come up
	I0930 11:13:22.367485   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:22.367998   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:22.368026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:22.367946   27716 retry.go:31] will retry after 818.869274ms: waiting for machine to come up
	I0930 11:13:23.187832   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:23.188286   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:23.188306   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:23.188246   27716 retry.go:31] will retry after 870.541242ms: waiting for machine to come up
	I0930 11:13:24.060866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:24.061364   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:24.061403   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:24.061339   27716 retry.go:31] will retry after 1.026163442s: waiting for machine to come up
	I0930 11:13:25.089407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:25.089859   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:25.089889   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:25.089789   27716 retry.go:31] will retry after 1.677341097s: waiting for machine to come up
	I0930 11:13:26.769716   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:26.770127   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:26.770173   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:26.770102   27716 retry.go:31] will retry after 2.102002194s: waiting for machine to come up
	I0930 11:13:28.873495   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:28.874089   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:28.874118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:28.874042   27716 retry.go:31] will retry after 2.512249945s: waiting for machine to come up
	I0930 11:13:31.388375   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:31.388813   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:31.388842   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:31.388766   27716 retry.go:31] will retry after 3.025058152s: waiting for machine to come up
	I0930 11:13:34.415391   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:34.415806   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:34.415826   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:34.415764   27716 retry.go:31] will retry after 3.6491044s: waiting for machine to come up
	I0930 11:13:38.067512   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:38.067932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:38.067957   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:38.067891   27716 retry.go:31] will retry after 5.462753525s: waiting for machine to come up
	I0930 11:13:43.535257   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.535767   26946 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:13:43.535792   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.535800   26946 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:13:43.536253   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260
	I0930 11:13:43.612168   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:43.612200   26946 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:13:43.612213   26946 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
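Note: the retries above poll libvirt for a DHCP lease on the private network mk-ha-033260 until the guest's MAC shows up. An equivalent manual check (illustrative, not part of the test run) would be:

	# List DHCP leases handed out on the cluster's private libvirt network
	virsh -c qemu:///system net-dhcp-leases mk-ha-033260
	# Confirm the interface/MAC attached to the new node
	virsh -c qemu:///system domiflist ha-033260-m03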
	I0930 11:13:43.614758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.615073   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260
	I0930 11:13:43.615102   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find defined IP address of network mk-ha-033260 interface with MAC address 52:54:00:f2:70:c8
	I0930 11:13:43.615180   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:43.615208   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:43.615240   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:43.615252   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:43.615269   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:43.619189   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: exit status 255: 
	I0930 11:13:43.619212   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 11:13:43.619222   26946 main.go:141] libmachine: (ha-033260-m03) DBG | command : exit 0
	I0930 11:13:43.619233   26946 main.go:141] libmachine: (ha-033260-m03) DBG | err     : exit status 255
	I0930 11:13:43.619246   26946 main.go:141] libmachine: (ha-033260-m03) DBG | output  : 
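Note: the failed probe above is retried until the guest's sshd answers. The same probe can be run by hand with the options shown in the log; the key path and the IP 192.168.39.238 below are taken from this run and are illustrative only:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	  -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa \
	  docker@192.168.39.238 'exit 0'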
	I0930 11:13:46.621877   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:46.624327   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.624849   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.624873   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.625052   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:46.625075   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:46.625113   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:46.625125   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:46.625137   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:46.749932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 11:13:46.750211   26946 main.go:141] libmachine: (ha-033260-m03) KVM machine creation complete!
	I0930 11:13:46.750551   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:46.751116   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751371   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751553   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:13:46.751568   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:13:46.752698   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:13:46.752714   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:13:46.752721   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:13:46.752728   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.755296   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.755738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755877   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.756027   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756136   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756284   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.756448   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.756639   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.756651   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:13:46.857068   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:46.857090   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:13:46.857097   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.859904   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860340   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.860372   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860564   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.860899   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861065   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861200   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.861350   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.861511   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.861526   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:13:46.970453   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:13:46.970520   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:13:46.970534   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:13:46.970543   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970766   26946 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:13:46.970791   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970955   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.973539   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.973929   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.973956   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.974221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.974372   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974556   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974665   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.974786   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.974938   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.974953   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:13:47.087604   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:13:47.087636   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.090559   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.090866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.090895   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.091089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.091283   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091400   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091516   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.091649   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.091811   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.091834   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:13:47.203919   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:47.203950   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:13:47.203969   26946 buildroot.go:174] setting up certificates
	I0930 11:13:47.203977   26946 provision.go:84] configureAuth start
	I0930 11:13:47.203986   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:47.204270   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.207236   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207589   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.207618   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207750   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.210196   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210560   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.210587   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210754   26946 provision.go:143] copyHostCerts
	I0930 11:13:47.210783   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210816   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:13:47.210826   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210895   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:13:47.210966   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.210983   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:13:47.210989   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.211013   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:13:47.211059   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211076   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:13:47.211082   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211104   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:13:47.211150   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
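Note: the server certificate generated above carries the node IP and hostnames listed in the san=[...] field as Subject Alternative Names. A quick way to verify them on the generated server.pem (illustrative only, using the path from this run) is:

	# Print the Subject Alternative Names of the generated server certificate
	openssl x509 -in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'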
	I0930 11:13:47.437398   26946 provision.go:177] copyRemoteCerts
	I0930 11:13:47.437447   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:13:47.437470   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.440541   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.440922   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.440953   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.441156   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.441379   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.441583   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.441760   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.524024   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:13:47.524094   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:13:47.548921   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:13:47.548991   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:13:47.573300   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:13:47.573362   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:13:47.597885   26946 provision.go:87] duration metric: took 393.894244ms to configureAuth
	I0930 11:13:47.597913   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:13:47.598137   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:47.598221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.600783   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601100   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.601141   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601308   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.601511   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601694   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601837   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.601988   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.602139   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.602153   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:13:47.824726   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:13:47.824757   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:13:47.824767   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetURL
	I0930 11:13:47.826205   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using libvirt version 6000000
	I0930 11:13:47.829313   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829732   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.829758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829979   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:13:47.829995   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:13:47.830002   26946 client.go:171] duration metric: took 29.095541403s to LocalClient.Create
	I0930 11:13:47.830029   26946 start.go:167] duration metric: took 29.095609634s to libmachine.API.Create "ha-033260"
	I0930 11:13:47.830042   26946 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:13:47.830059   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:13:47.830080   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:47.830308   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:13:47.830331   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.832443   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.832840   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.832866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.833032   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.833204   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.833336   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.833448   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.911982   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:13:47.916413   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:13:47.916434   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:13:47.916512   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:13:47.916604   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:13:47.916615   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:13:47.916726   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:13:47.926360   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:47.951398   26946 start.go:296] duration metric: took 121.337458ms for postStartSetup
	I0930 11:13:47.951443   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:47.951959   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.954522   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.954882   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.954902   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.955203   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:47.955450   26946 start.go:128] duration metric: took 29.240250665s to createHost
	I0930 11:13:47.955475   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.957714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958054   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.958091   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958262   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.958436   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958562   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958708   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.958822   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.958982   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.958994   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:13:48.062976   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694828.042605099
	
	I0930 11:13:48.062999   26946 fix.go:216] guest clock: 1727694828.042605099
	I0930 11:13:48.063009   26946 fix.go:229] Guest: 2024-09-30 11:13:48.042605099 +0000 UTC Remote: 2024-09-30 11:13:47.955462433 +0000 UTC m=+151.020514213 (delta=87.142666ms)
	I0930 11:13:48.063030   26946 fix.go:200] guest clock delta is within tolerance: 87.142666ms
	I0930 11:13:48.063037   26946 start.go:83] releasing machines lock for "ha-033260-m03", held for 29.347943498s
	I0930 11:13:48.063057   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.063295   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:48.065833   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.066130   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.066166   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.068440   26946 out.go:177] * Found network options:
	I0930 11:13:48.070194   26946 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:13:48.071578   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.071602   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.071621   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072253   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072426   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072506   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:13:48.072552   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:13:48.072605   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.072630   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.072698   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:13:48.072719   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:48.075267   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075365   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075641   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075667   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075715   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075746   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075778   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.075958   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.075973   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.076123   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076126   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.076233   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.076311   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076464   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.315424   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:13:48.322103   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:13:48.322167   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:13:48.340329   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:13:48.340354   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:13:48.340419   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:13:48.356866   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:13:48.372077   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:13:48.372139   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:13:48.387616   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:13:48.402259   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:13:48.523588   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:13:48.671634   26946 docker.go:233] disabling docker service ...
	I0930 11:13:48.671693   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:13:48.687483   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:13:48.702106   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:13:48.848121   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:13:48.976600   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:13:48.991745   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:13:49.014226   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:13:49.014303   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.025816   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:13:49.025892   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.038153   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.049762   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.061409   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:13:49.073521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.084788   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.104074   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.116909   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:13:49.129116   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:13:49.129180   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:13:49.143704   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:13:49.155037   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:49.274882   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
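	Note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the "pod" cgroup, and opens unprivileged low ports via default_sysctls, then restarts crio. A sketch of how that command list could be assembled before being handed to ssh_runner is below; the function name and structure are illustrative, not minikube's internal API.

package main

import "fmt"

// crioConfigCommands returns the shell commands used to adjust the CRI-O
// drop-in config, mirroring the sequence logged above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		fmt.Sprintf(`sudo grep -q "^ *default_sysctls" %[1]s || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' %[1]s`, conf),
		fmt.Sprintf(`sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' %s`, conf),
	}
}

func main() {
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(cmd)
	}
}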
	I0930 11:13:49.369751   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:13:49.369822   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:13:49.375071   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:13:49.375129   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:13:49.379040   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:13:49.421444   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
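	Note: start.go:579 above prints the key/value fields reported by `sudo /usr/bin/crictl version`. A small Go sketch of parsing that output into a struct; the type and function names are assumptions for illustration.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// CrictlVersion holds the fields reported by `crictl version`.
type CrictlVersion struct {
	Version           string
	RuntimeName       string
	RuntimeVersion    string
	RuntimeApiVersion string
}

func parseCrictlVersion(out string) CrictlVersion {
	var v CrictlVersion
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		key, val, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		val = strings.TrimSpace(val)
		switch strings.TrimSpace(key) {
		case "Version":
			v.Version = val
		case "RuntimeName":
			v.RuntimeName = val
		case "RuntimeVersion":
			v.RuntimeVersion = val
		case "RuntimeApiVersion":
			v.RuntimeApiVersion = val
		}
	}
	return v
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	fmt.Printf("%+v\n", parseCrictlVersion(out))
}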
	I0930 11:13:49.421545   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.450271   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.481221   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:13:49.482604   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:13:49.483828   26946 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:13:49.485093   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:49.488106   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488528   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:49.488555   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488791   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:13:49.493484   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:13:49.506933   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:13:49.507212   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:49.507471   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.507506   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.522665   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0930 11:13:49.523038   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.523529   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.523558   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.523847   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.524064   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:13:49.525464   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:49.525875   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.525916   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.540657   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0930 11:13:49.541129   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.541659   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.541680   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.541991   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.542172   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:49.542336   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:13:49.542347   26946 certs.go:194] generating shared ca certs ...
	I0930 11:13:49.542362   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.542476   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:13:49.542515   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:13:49.542525   26946 certs.go:256] generating profile certs ...
	I0930 11:13:49.542591   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:13:49.542615   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:13:49.542628   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:13:49.661476   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 ...
	I0930 11:13:49.661515   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37: {Name:mk149c204bf31f855e781b37ed00d2d45943dc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661762   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 ...
	I0930 11:13:49.661785   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37: {Name:mka1c6759c2661bfc3ab07f3168b7da60e9fc340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661922   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:13:49.662094   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
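	Note: certs.go above mints a fresh apiserver certificate whose IP SANs cover the service ClusterIP (10.96.0.1), localhost, all three control-plane node IPs and the kube-vip address 192.168.39.254. A self-contained crypto/x509 sketch of issuing such a cert is below; it generates a throwaway CA in memory instead of loading minikube's ca.key, and key sizes, validity and error handling are simplified for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for .minikube/ca.{crt,key}. Errors are
	// elided to keep the sketch short.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver serving cert with the IP SANs listed in the log above.
	ips := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.249", "192.168.39.3", "192.168.39.238", "192.168.39.254"}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, ip := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)

	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}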
	I0930 11:13:49.662275   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:13:49.662294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:13:49.662313   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:13:49.662333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:13:49.662351   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:13:49.662368   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:13:49.662384   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:13:49.662452   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:13:49.677713   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:13:49.677801   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:13:49.677835   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:13:49.677845   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:13:49.677866   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:13:49.677888   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:13:49.677908   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:13:49.677944   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:49.677971   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:13:49.677983   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:13:49.677997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:49.678030   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:49.681296   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.681887   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:49.681920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.682144   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:49.682365   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:49.682543   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:49.682691   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:49.766051   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:13:49.771499   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:13:49.783878   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:13:49.789403   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:13:49.801027   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:13:49.806774   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:13:49.824334   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:13:49.828617   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:13:49.838958   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:13:49.843225   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:13:49.853655   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:13:49.857681   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:13:49.869752   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:13:49.897794   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:13:49.925363   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:13:49.951437   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:13:49.978863   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:13:50.005498   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:13:50.030426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:13:50.055825   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:13:50.080625   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:13:50.113315   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:13:50.142931   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:13:50.168186   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:13:50.185792   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:13:50.203667   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:13:50.222202   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:13:50.241795   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:13:50.260704   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:13:50.278865   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:13:50.296763   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:13:50.303234   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:13:50.314412   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319228   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.325090   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:13:50.337510   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:13:50.351103   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356273   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356331   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.362227   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:13:50.373066   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:13:50.384243   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.388958   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.389012   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.394820   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:13:50.406295   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:13:50.410622   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:13:50.410674   26946 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:13:50.410806   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:13:50.410833   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:13:50.410873   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:13:50.426800   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:13:50.426870   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
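	Note: kube-vip.go:137 above prints the static-pod manifest it is about to place at /etc/kubernetes/manifests/kube-vip.yaml (the scp at 11:13:51 further down). A heavily shortened text/template sketch of rendering such a manifest from a parameter struct follows; the struct, template and field names are illustrative and not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeVipParams captures the values that vary per cluster in the manifest above.
type kubeVipParams struct {
	VIP       string // control-plane virtual IP, e.g. 192.168.39.254
	Port      string
	Interface string
	Image     string
	EnableLB  bool
}

// Shortened stand-in template: only a few env entries are shown.
var manifestTmpl = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
    - {name: lb_enable, value: "{{.EnableLB}}"}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`))

func main() {
	p := kubeVipParams{
		VIP:       "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		EnableLB:  true,
	}
	if err := manifestTmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}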
	I0930 11:13:50.426931   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.437767   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:13:50.437827   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.448545   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 11:13:50.448565   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 11:13:50.448591   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448597   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:13:50.448619   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448655   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448668   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448599   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:50.460142   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:13:50.460178   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:13:50.460491   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:13:50.460521   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:13:50.475258   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.475370   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.603685   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:13:50.603734   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
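	Note: the binary.go lines above stream kubelet/kubeadm/kubectl straight from dl.k8s.io, pinning each download to its published .sha256 file. A Go sketch of that verify-after-download step; the URL layout is taken from the log, everything else is illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url, or an error for non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch, refusing to install binary")
	}
	fmt.Printf("kubectl verified: %d bytes, sha256 %s\n", len(bin), want)
}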
	I0930 11:13:51.331864   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:13:51.343111   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:13:51.361905   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:13:51.380114   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:13:51.398229   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:13:51.402565   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:13:51.414789   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:51.547939   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:13:51.568598   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:51.569032   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:51.569117   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:51.584541   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0930 11:13:51.585019   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:51.585485   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:51.585506   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:51.585824   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:51.586011   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:51.586156   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:13:51.586275   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:13:51.586294   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:51.589730   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590160   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:51.590189   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590326   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:51.590673   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:51.590813   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:51.590943   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:51.742155   26946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:51.742217   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I0930 11:14:14.534669   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (22.792425292s)
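	Note: the join command above authenticates the cluster to the new node via --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info (the same value `kubeadm token create --print-join-command` emits). A sketch of computing that hash from a ca.crt PEM; the file path is an assumption for illustration.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed for illustration; on a minikube node the CA lives at
	// /var/lib/minikube/certs/ca.crt.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("ca.crt does not contain a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
}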
	I0930 11:14:14.534703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:14:15.090933   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m03 minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:14:15.217971   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:14:15.356327   26946 start.go:319] duration metric: took 23.770167838s to joinCluster
	I0930 11:14:15.356406   26946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:14:15.356782   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:14:15.358117   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:14:15.359571   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:15.622789   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:14:15.640897   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:14:15.641233   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:14:15.641327   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:14:15.641657   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:14:15.641759   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:15.641771   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:15.641783   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:15.641790   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:15.644778   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:16.142790   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.142817   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.142829   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.142842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.146568   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:16.642107   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.642131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.642142   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.642147   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.648466   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:17.142339   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.142362   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.142375   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.142381   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.146498   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:17.642900   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.642921   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.642930   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.642934   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.646792   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:17.647749   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
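	Note: node_ready.go above polls GET /api/v1/nodes/ha-033260-m03 roughly every 500ms until the node's Ready condition turns True or the 6m deadline passes. A sketch of that loop against the same endpoint using only net/http and encoding/json; client-certificate auth is omitted and an insecure TLS client is assumed purely for brevity, so against a real apiserver this would need the profile's client cert.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors just the part of the Node object the readiness check needs.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Real code presents the client cert/key from the profile; this sketch
	// skips TLS verification entirely.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	url := "https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03"

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := nodeReady(client, url)
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to be Ready")
}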
	I0930 11:14:18.141856   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.141880   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.141889   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.141893   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.145059   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:18.641848   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.641883   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.641896   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.641905   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.645609   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:19.142000   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.142030   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.142041   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.142046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.146124   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.642709   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.642734   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.642746   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.642751   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.647278   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.648375   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:20.142851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.142871   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.142879   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.142883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.146328   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:20.642913   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.642940   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.642954   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.642961   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.653974   26946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:14:21.142909   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.142931   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.142942   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.142954   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.146862   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:21.642348   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.642373   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.642383   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.642388   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.647699   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:22.142178   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.142198   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.142206   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.142210   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.145760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:22.146824   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:22.642895   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.642917   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.642925   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.642931   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.648085   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:23.141847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.141872   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.141883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.141888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.149699   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:23.641992   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.642013   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.642023   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.642029   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.645640   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:24.142073   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.142096   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.142104   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.142108   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.146322   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:24.146891   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:24.642695   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.642716   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.642724   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.642731   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.646216   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:25.142500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.142538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.142546   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.146687   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:25.642542   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.642566   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.642573   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.642577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.646661   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:26.142499   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.142535   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.142545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.146202   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:26.147018   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:26.642712   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.642739   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.642751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.642756   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.646338   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:27.142246   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.142276   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.142286   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.142292   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.146473   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:27.642325   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.642347   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.642355   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.642359   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.646109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.142885   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.142912   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.142923   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.142929   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.146499   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.147250   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:28.642625   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.642652   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.642663   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.642669   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.646618   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.142391   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.142412   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.142420   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.142424   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.146320   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.642615   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.642640   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.642649   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.642653   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.646130   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.142916   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.142938   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.142947   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.142951   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.146109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.642863   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.642885   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.642893   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.642897   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.646458   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.647204   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:31.142601   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.142623   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.142631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.142635   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.146623   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.642077   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.642103   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.642114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.642119   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.645322   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.645964   26946 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:14:31.645987   26946 node_ready.go:38] duration metric: took 16.004306964s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:14:31.645997   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:14:31.646075   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:31.646090   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.646099   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.646106   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.653396   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:31.663320   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.663400   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:14:31.663405   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.663412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.663420   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.666829   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.667522   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.667537   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.667544   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.667550   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.670668   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.671278   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.671301   26946 pod_ready.go:82] duration metric: took 7.951059ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671309   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671362   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:14:31.671369   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.671376   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.671383   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.674317   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.675093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.675107   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.675114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.675120   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.678167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.678702   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.678717   26946 pod_ready.go:82] duration metric: took 7.402263ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678725   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678775   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:14:31.678782   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.678789   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.678794   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.682042   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.683033   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.683050   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.683060   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.683067   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.686124   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.686928   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.686944   26946 pod_ready.go:82] duration metric: took 8.212366ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.686951   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.687047   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:14:31.687059   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.687068   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.687077   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690190   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.690825   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:31.690840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.690850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690858   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.693597   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.694016   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.694032   26946 pod_ready.go:82] duration metric: took 7.073598ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.694050   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.842476   26946 request.go:632] Waited for 148.347924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842535   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842540   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.842547   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.842551   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.846779   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.042378   26946 request.go:632] Waited for 194.977116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042433   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042441   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.042451   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.042460   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.046938   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.047883   26946 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.047901   26946 pod_ready.go:82] duration metric: took 353.843104ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.047915   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.242541   26946 request.go:632] Waited for 194.549595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242605   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242614   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.242625   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.242634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.246270   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.443112   26946 request.go:632] Waited for 196.194005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443180   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443188   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.443196   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.443204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.446839   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.447484   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.447503   26946 pod_ready.go:82] duration metric: took 399.580784ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.447514   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.642591   26946 request.go:632] Waited for 194.994624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642658   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642663   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.642670   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.642674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.646484   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.842626   26946 request.go:632] Waited for 195.406068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842682   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842700   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.842723   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.842729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.846693   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.847589   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.847611   26946 pod_ready.go:82] duration metric: took 400.088499ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.847622   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.042743   26946 request.go:632] Waited for 195.040991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042794   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042810   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.042822   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.042831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.047437   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:33.242766   26946 request.go:632] Waited for 194.350243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242831   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.242838   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.242842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.246530   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.247420   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.247442   26946 pod_ready.go:82] duration metric: took 399.811844ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.247458   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.442488   26946 request.go:632] Waited for 194.945176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442539   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442545   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.442552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.442555   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.446162   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.642540   26946 request.go:632] Waited for 195.369281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642603   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642609   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.642615   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.642620   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.646221   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.646635   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.646655   26946 pod_ready.go:82] duration metric: took 399.188776ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.646667   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.843125   26946 request.go:632] Waited for 196.391494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843216   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843227   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.843238   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.843244   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.846706   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.042579   26946 request.go:632] Waited for 195.024865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042680   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042689   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.042697   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.042701   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.046091   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.046788   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.046810   26946 pod_ready.go:82] duration metric: took 400.13538ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.046823   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.242282   26946 request.go:632] Waited for 195.389369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242349   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242356   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.242365   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.242370   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.246179   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.442166   26946 request.go:632] Waited for 195.280581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442224   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442230   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.442237   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.442240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.445326   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.445954   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.445978   26946 pod_ready.go:82] duration metric: took 399.145783ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.445991   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.643049   26946 request.go:632] Waited for 196.981464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643124   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.643141   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.643148   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.647040   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.843108   26946 request.go:632] Waited for 195.398341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843190   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843212   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.843227   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.843238   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.846825   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.847411   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.847432   26946 pod_ready.go:82] duration metric: took 401.432801ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.847445   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.043014   26946 request.go:632] Waited for 195.507309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043102   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.043109   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.043117   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.046836   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.242781   26946 request.go:632] Waited for 195.218665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242856   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.242862   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.242866   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.246468   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.247353   26946 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.247380   26946 pod_ready.go:82] duration metric: took 399.923772ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.247393   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.442345   26946 request.go:632] Waited for 194.883869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442516   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442529   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.442541   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.442550   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.446031   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.642937   26946 request.go:632] Waited for 196.342972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642985   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642990   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.642997   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.643001   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.646624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.647369   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.647389   26946 pod_ready.go:82] duration metric: took 399.989175ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.647398   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.842485   26946 request.go:632] Waited for 195.020246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842575   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842586   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.842597   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.842605   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.845997   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.043063   26946 request.go:632] Waited for 196.343615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043113   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043119   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.043125   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.043131   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.046327   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.046783   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.046799   26946 pod_ready.go:82] duration metric: took 399.395226ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.046810   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.242936   26946 request.go:632] Waited for 196.062784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243024   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.243037   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.243046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.246888   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.442803   26946 request.go:632] Waited for 195.27104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442859   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442867   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.442877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.442888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.446304   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.446972   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.447001   26946 pod_ready.go:82] duration metric: took 400.183775ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.447011   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.642468   26946 request.go:632] Waited for 195.395201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642532   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.642545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.642549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.646175   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.842841   26946 request.go:632] Waited for 195.970164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842924   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.842938   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.842946   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.846452   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.847134   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.847153   26946 pod_ready.go:82] duration metric: took 400.136505ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.847163   26946 pod_ready.go:39] duration metric: took 5.201155018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:14:36.847177   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:14:36.847229   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:14:36.869184   26946 api_server.go:72] duration metric: took 21.512734614s to wait for apiserver process to appear ...
	I0930 11:14:36.869210   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:14:36.869231   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:14:36.875656   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:14:36.875723   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:14:36.875730   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.875741   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.875751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.876680   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:14:36.876763   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:14:36.876785   26946 api_server.go:131] duration metric: took 7.567961ms to wait for apiserver health ...
	I0930 11:14:36.876795   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:14:37.042474   26946 request.go:632] Waited for 165.583212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042549   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042557   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.042568   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.042577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.049247   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:37.056036   26946 system_pods.go:59] 24 kube-system pods found
	I0930 11:14:37.056063   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.056069   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.056073   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.056076   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.056079   26946 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.056082   26946 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.056085   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.056088   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.056091   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.056094   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.056097   26946 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.056100   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.056105   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.056108   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.056111   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.056115   26946 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.056120   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.056151   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.056164   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.056169   26946 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.056177   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.056182   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.056189   26946 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.056194   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.056204   26946 system_pods.go:74] duration metric: took 179.399341ms to wait for pod list to return data ...
	I0930 11:14:37.056216   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:14:37.242741   26946 request.go:632] Waited for 186.4192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242795   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242800   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.242807   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.242813   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.247153   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.247269   26946 default_sa.go:45] found service account: "default"
	I0930 11:14:37.247285   26946 default_sa.go:55] duration metric: took 191.060236ms for default service account to be created ...
	I0930 11:14:37.247292   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:14:37.442756   26946 request.go:632] Waited for 195.39174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442830   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.442850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.442861   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.450094   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:37.457440   26946 system_pods.go:86] 24 kube-system pods found
	I0930 11:14:37.457477   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.457485   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.457491   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.457497   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.457506   26946 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.457512   26946 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.457518   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.457524   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.457530   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.457538   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.457547   26946 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.457553   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.457562   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.457569   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.457575   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.457584   26946 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.457590   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.457597   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.457603   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.457612   26946 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.457630   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.457637   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.457643   26946 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.457648   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.457657   26946 system_pods.go:126] duration metric: took 210.359061ms to wait for k8s-apps to be running ...
	I0930 11:14:37.457669   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:14:37.457721   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:14:37.476929   26946 system_svc.go:56] duration metric: took 19.252575ms WaitForService to wait for kubelet
	I0930 11:14:37.476958   26946 kubeadm.go:582] duration metric: took 22.120515994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:14:37.476982   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:14:37.642377   26946 request.go:632] Waited for 165.309074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642424   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642429   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.642438   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.642449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.646747   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.647864   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647885   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647896   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647900   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647904   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647908   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647912   26946 node_conditions.go:105] duration metric: took 170.925329ms to run NodePressure ...
	I0930 11:14:37.647922   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:14:37.647945   26946 start.go:255] writing updated cluster config ...
	I0930 11:14:37.648212   26946 ssh_runner.go:195] Run: rm -f paused
	I0930 11:14:37.699426   26946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:14:37.701518   26946 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.351740523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=654d8294-2d2b-467d-9ba7-d0ff5f661ba9 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.353538027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffe6ee2a-ddb5-4e73-95d3-feb8e29b4a96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.354006439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695101353980568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffe6ee2a-ddb5-4e73-95d3-feb8e29b4a96 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.354796259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f0e2775-0ace-48de-bd98-cf864d6706b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.354871266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f0e2775-0ace-48de-bd98-cf864d6706b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.355098840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f0e2775-0ace-48de-bd98-cf864d6706b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.402215745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=664eafb2-fb09-42fb-bd62-71cb9723c633 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.402949005Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=664eafb2-fb09-42fb-bd62-71cb9723c633 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.404851190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c25174a-eb49-438d-b2bd-dfbce550cb2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.405395332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695101405369137,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c25174a-eb49-438d-b2bd-dfbce550cb2a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.406333121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=859a3efb-197c-46f3-8793-b501459e7be9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.406406101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=859a3efb-197c-46f3-8793-b501459e7be9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.406700321Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=859a3efb-197c-46f3-8793-b501459e7be9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.443841665Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a5ef37d-fcf9-4237-8e44-86a247488570 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.444092538Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-nbhwc,Uid:e62e1e44-3723-496c-85a3-7a79e9c8264b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694878999607270,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:14:38.675928095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5frmm,Uid:7333717d-95d5-4990-bac9-8443a51eee97,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1727694738389437732,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:18.075315913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:964381ab-f2ac-4361-a7e0-5212fff5e26e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694738388233227,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T11:12:18.074715472Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kt87v,Uid:26f75c31-d44d-4a4c-8048-b6ce5c824151,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727694738374333861,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:18.066691994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&PodSandboxMetadata{Name:kube-proxy-mxvxr,Uid:314da0b5-6242-4af0-8e99-d0aaba82a43e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694726485116537,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-09-30T11:12:04.378056083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&PodSandboxMetadata{Name:kindnet-g94k6,Uid:260e385d-9e17-4af8-a854-8683afb714c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694726174379745,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:04.361135889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-033260,Uid:43955f8cf95999657a88952585c93768,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694713030111436,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 43955f8cf95999657a88952585c93768,kubernetes.io/config.seen: 2024-09-30T11:11:52.558361791Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-033260,Uid:bc91a2a25badfe2ca88893e1f6ac643a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694713027013730,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{kube
rnetes.io/config.hash: bc91a2a25badfe2ca88893e1f6ac643a,kubernetes.io/config.seen: 2024-09-30T11:11:52.558364200Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-033260,Uid:6c1732ebd63e52d0c6ac6d9cd648cff5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694713022237134,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.249:8443,kubernetes.io/config.hash: 6c1732ebd63e52d0c6ac6d9cd648cff5,kubernetes.io/config.seen: 2024-09-30T11:11:52.558360466Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3
941a9c701561b0d3d113ef8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-033260,Uid:734999721cb3f48c24354599fcaf3db2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694713020603401,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 734999721cb3f48c24354599fcaf3db2,kubernetes.io/config.seen: 2024-09-30T11:11:52.558362997Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&PodSandboxMetadata{Name:etcd-ha-033260,Uid:4ee6f0cb154890b5d1bf6173256957d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727694713017992899,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-033260,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.249:2379,kubernetes.io/config.hash: 4ee6f0cb154890b5d1bf6173256957d4,kubernetes.io/config.seen: 2024-09-30T11:11:52.558355509Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2a5ef37d-fcf9-4237-8e44-86a247488570 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.444808077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2c0908a-09ff-47e4-9a05-84ff4d21ff1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.444866724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2c0908a-09ff-47e4-9a05-84ff4d21ff1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.445146992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2c0908a-09ff-47e4-9a05-84ff4d21ff1a name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.450328327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4b72448-bf0a-41b9-97ac-84aa0dc16020 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.450393391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4b72448-bf0a-41b9-97ac-84aa0dc16020 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.452165790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae4bbdcb-1132-495e-9659-2bd7d5864af9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.452569286Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695101452546894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae4bbdcb-1132-495e-9659-2bd7d5864af9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.453137394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a099b8f2-e630-45f7-9dff-b2f487c5ce07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.453206399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a099b8f2-e630-45f7-9dff-b2f487c5ce07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:21 ha-033260 crio[660]: time="2024-09-30 11:18:21.453457550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a099b8f2-e630-45f7-9dff-b2f487c5ce07 name=/runtime.v1.RuntimeService/ListContainers
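
The repeated /runtime.v1.RuntimeService/ListContainers, ListPodSandbox, Version and ImageFsInfo entries above are routine CRI polling of the CRI-O daemon. As a rough sketch only (not part of the test harness), an equivalent ListContainers call can be issued in Go against the CRI-O socket with the upstream CRI API; the socket path, timeout and running-state filter below are illustrative assumptions, not values taken from this report.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path (the CRI-O default); adjust for other runtimes.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as the /runtime.v1.RuntimeService/ListContainers entries above,
		// filtered to running containers as in the SANDBOX_READY/CONTAINER_RUNNING requests.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.Id)
		}
	}

Run on the node (for example via minikube ssh), this prints roughly the same container set that the "container status" table below summarizes.
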
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	f612e29e1b4eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   571ace347c86d       storage-provisioner
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   b2990036962da       kindnet-g94k6
	7a9e01197e5c6       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2bd722c6afa63       kube-vip-ha-033260
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   f789f882a4d3c       etcd-ha-033260
	e62c0a6cc031f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6bdfa51706557       kube-controller-manager-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	cd2027f0a04e1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   676d3fbaf3e6f       kube-apiserver-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:53856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00078279s
	[INFO] 10.244.0.4:40457 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001984462s
	[INFO] 10.244.2.2:53822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006986108s
	[INFO] 10.244.2.2:56668 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001677174s
	[INFO] 10.244.1.2:39538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172765s
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.1.2:57277 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000266561s
	[INFO] 10.244.1.2:48530 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000385853s
	[INFO] 10.244.0.4:37489 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002109336s
	[INFO] 10.244.0.4:53881 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132699s
	[INFO] 10.244.0.4:35131 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120989s
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    e1ab2d78-3004-455b-b8b3-86a48689299f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                6m3s   kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           4m1s   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:15:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    08e05cdc-874f-4f82-99d4-84bb26fd07ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-033260-m02 status is now: NodeNotReady
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    92c7790b-7ee9-43e4-b1b8-fd69ae5fa989
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    15a5a2bf-b69b-4b89-b5f2-f6529ae084b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-033260-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040385] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839402] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.653040] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.597753] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	{"level":"warn","ts":"2024-09-30T11:18:21.712437Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.719865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.723937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.732858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.739531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.749931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.750895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.754533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.758563Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.764734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.771975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.778576Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.782856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.786395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.792288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.798519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.807611Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.809860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.812311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.813234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.817504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.821143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.827143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.833725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:21.850771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:21 up 7 min,  0 users,  load average: 0.31, 0.18, 0.08
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:17:47.860978       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:17:57.862776       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:17:57.862852       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:17:57.862995       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:17:57.863020       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:17:57.863078       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:17:57.863084       1 main.go:299] handling current node
	I0930 11:17:57.863098       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:17:57.863102       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854593       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:07.854770       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854951       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:07.854979       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:07.855034       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:07.855052       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:18:07.855106       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:07.855130       1 main.go:299] handling current node
	I0930 11:18:17.860759       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:17.860855       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:17.860991       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:17.861014       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:18:17.861065       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:17.861084       1 main.go:299] handling current node
	I0930 11:18:17.861114       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:17.861129       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac] <==
	I0930 11:11:58.463989       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 11:11:58.477865       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249]
	I0930 11:11:58.479372       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:11:58.487328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:11:58.586099       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 11:11:59.517972       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 11:11:59.542879       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 11:11:59.558820       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 11:12:04.282712       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0930 11:12:04.376507       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0930 11:14:41.794861       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58556: use of closed network connection
	E0930 11:14:41.976585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58584: use of closed network connection
	E0930 11:14:42.175263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58602: use of closed network connection
	E0930 11:14:42.398453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58626: use of closed network connection
	E0930 11:14:42.598999       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58646: use of closed network connection
	E0930 11:14:42.786264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58670: use of closed network connection
	E0930 11:14:42.985795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58688: use of closed network connection
	E0930 11:14:43.164451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58700: use of closed network connection
	E0930 11:14:43.352582       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58708: use of closed network connection
	E0930 11:14:43.634509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58726: use of closed network connection
	E0930 11:14:43.812335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58746: use of closed network connection
	E0930 11:14:44.006684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58766: use of closed network connection
	E0930 11:14:44.194031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58782: use of closed network connection
	E0930 11:14:44.561371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58814: use of closed network connection
	W0930 11:16:08.485734       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	
	
	==> kube-controller-manager [e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489] <==
	I0930 11:15:14.593101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.593158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.605401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.879876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:15.297330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:16.002721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.158455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.429273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.922721       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-033260-m04"
	I0930 11:15:18.922856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:19.229459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:24.734460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561906       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:15:34.575771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:35.966445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:45.204985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:16:30.993129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:30.994314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:16:31.023898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:31.050052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.150574ms"
	I0930 11:16:31.050219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.36µs"
	I0930 11:16:31.218479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:34.045967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:36.316239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:12:06.949025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:12:06.986064       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:12:06.986193       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:12:07.041171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:12:07.041238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:12:07.041262       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:12:07.044020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:12:07.044727       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:12:07.044757       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:12:07.047853       1 config.go:199] "Starting service config controller"
	I0930 11:12:07.048187       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:12:07.048613       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:12:07.048700       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:12:07.051971       1 config.go:328] "Starting node config controller"
	I0930 11:12:07.052033       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:12:07.148982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:12:07.149026       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:12:07.152927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	I0930 11:11:59.743507       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:14:38.641000       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.642588       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 12532e14-b4c0-4c7d-ab93-e96698fbc986(default/busybox-7dff88458-rkczc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkczc"
	E0930 11:14:38.642720       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" pod="default/busybox-7dff88458-rkczc"
	I0930 11:14:38.642772       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.700019       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.700408       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e62e1e44-3723-496c-85a3-7a79e9c8264b(default/busybox-7dff88458-nbhwc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nbhwc"
	E0930 11:14:38.700579       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" pod="default/busybox-7dff88458-nbhwc"
	I0930 11:14:38.700685       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.701396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:14:38.701487       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 004c0140-b81f-4e7b-aa0d-0aa6f7403351(default/busybox-7dff88458-748nr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-748nr"
	E0930 11:14:38.701528       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" pod="default/busybox-7dff88458-748nr"
	I0930 11:14:38.701566       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:15:14.650435       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mkbm9" node="ha-033260-m04"
	E0930 11:15:14.650543       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-mkbm9"
	E0930 11:15:14.687957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	
	
	==> kubelet <==
	Sep 30 11:16:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:16:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603405    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603474    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605544    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605573    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.607869    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.608153    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611241    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611290    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.612829    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.613366    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.615817    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.616359    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.469234    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620277    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620330    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622386    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622824    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:19 ha-033260 kubelet[1307]: E0930 11:18:19.628964    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695099627068358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:19 ha-033260 kubelet[1307]: E0930 11:18:19.629013    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695099627068358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr: (4.052471393s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.536817322s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m03_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:11:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:11:16.968147   26946 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:11:16.968259   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968268   26946 out.go:358] Setting ErrFile to fd 2...
	I0930 11:11:16.968272   26946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:11:16.968475   26946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:11:16.969014   26946 out.go:352] Setting JSON to false
	I0930 11:11:16.969874   26946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3224,"bootTime":1727691453,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:11:16.969971   26946 start.go:139] virtualization: kvm guest
	I0930 11:11:16.972340   26946 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:11:16.973700   26946 notify.go:220] Checking for updates...
	I0930 11:11:16.973712   26946 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:11:16.975164   26946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:11:16.976567   26946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:11:16.977791   26946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:16.978971   26946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:11:16.980151   26946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:11:16.981437   26946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:11:17.016837   26946 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 11:11:17.017911   26946 start.go:297] selected driver: kvm2
	I0930 11:11:17.017921   26946 start.go:901] validating driver "kvm2" against <nil>
	I0930 11:11:17.017932   26946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:11:17.018657   26946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.018742   26946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:11:17.034306   26946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:11:17.034349   26946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 11:11:17.034586   26946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:11:17.034614   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:17.034651   26946 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0930 11:11:17.034662   26946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0930 11:11:17.034717   26946 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:17.034818   26946 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:11:17.036732   26946 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:11:17.037780   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:17.037816   26946 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:11:17.037823   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:11:17.037892   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:11:17.037903   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:11:17.038215   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:17.038236   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json: {Name:mkb40a3a18f0ab7d52c306f0204aa0e145307acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:17.038367   26946 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:11:17.038394   26946 start.go:364] duration metric: took 15.009µs to acquireMachinesLock for "ha-033260"
	I0930 11:11:17.038414   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:11:17.038466   26946 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 11:11:17.039863   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:11:17.039975   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:11:17.040024   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:11:17.054681   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0930 11:11:17.055106   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:11:17.055654   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:11:17.055673   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:11:17.056010   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:11:17.056264   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:17.056403   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:17.056571   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:11:17.056596   26946 client.go:168] LocalClient.Create starting
	I0930 11:11:17.056623   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:11:17.056664   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056676   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056725   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:11:17.056743   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:11:17.056752   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:11:17.056765   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:11:17.056773   26946 main.go:141] libmachine: (ha-033260) Calling .PreCreateCheck
	I0930 11:11:17.057093   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:17.057527   26946 main.go:141] libmachine: Creating machine...
	I0930 11:11:17.057540   26946 main.go:141] libmachine: (ha-033260) Calling .Create
	I0930 11:11:17.057672   26946 main.go:141] libmachine: (ha-033260) Creating KVM machine...
	I0930 11:11:17.058923   26946 main.go:141] libmachine: (ha-033260) DBG | found existing default KVM network
	I0930 11:11:17.059559   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.059428   26970 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0930 11:11:17.059596   26946 main.go:141] libmachine: (ha-033260) DBG | created network xml: 
	I0930 11:11:17.059615   26946 main.go:141] libmachine: (ha-033260) DBG | <network>
	I0930 11:11:17.059621   26946 main.go:141] libmachine: (ha-033260) DBG |   <name>mk-ha-033260</name>
	I0930 11:11:17.059629   26946 main.go:141] libmachine: (ha-033260) DBG |   <dns enable='no'/>
	I0930 11:11:17.059635   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059640   26946 main.go:141] libmachine: (ha-033260) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0930 11:11:17.059646   26946 main.go:141] libmachine: (ha-033260) DBG |     <dhcp>
	I0930 11:11:17.059651   26946 main.go:141] libmachine: (ha-033260) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0930 11:11:17.059658   26946 main.go:141] libmachine: (ha-033260) DBG |     </dhcp>
	I0930 11:11:17.059663   26946 main.go:141] libmachine: (ha-033260) DBG |   </ip>
	I0930 11:11:17.059667   26946 main.go:141] libmachine: (ha-033260) DBG |   
	I0930 11:11:17.059673   26946 main.go:141] libmachine: (ha-033260) DBG | </network>
	I0930 11:11:17.059679   26946 main.go:141] libmachine: (ha-033260) DBG | 
	I0930 11:11:17.064624   26946 main.go:141] libmachine: (ha-033260) DBG | trying to create private KVM network mk-ha-033260 192.168.39.0/24...
	I0930 11:11:17.128145   26946 main.go:141] libmachine: (ha-033260) DBG | private KVM network mk-ha-033260 192.168.39.0/24 created
	I0930 11:11:17.128172   26946 main.go:141] libmachine: (ha-033260) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.128183   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.128100   26970 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.128201   26946 main.go:141] libmachine: (ha-033260) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:11:17.128218   26946 main.go:141] libmachine: (ha-033260) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:11:17.365994   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.365804   26970 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa...
	I0930 11:11:17.493008   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492862   26970 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk...
	I0930 11:11:17.493034   26946 main.go:141] libmachine: (ha-033260) DBG | Writing magic tar header
	I0930 11:11:17.493046   26946 main.go:141] libmachine: (ha-033260) DBG | Writing SSH key tar header
	I0930 11:11:17.493053   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:17.492975   26970 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 ...
	I0930 11:11:17.493066   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260
	I0930 11:11:17.493124   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260 (perms=drwx------)
	I0930 11:11:17.493158   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:11:17.493173   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:11:17.493181   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:11:17.493193   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:11:17.493202   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:11:17.493226   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:11:17.493246   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:11:17.493258   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:11:17.493264   26946 main.go:141] libmachine: (ha-033260) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:11:17.493275   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:17.493280   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:11:17.493286   26946 main.go:141] libmachine: (ha-033260) DBG | Checking permissions on dir: /home
	I0930 11:11:17.493291   26946 main.go:141] libmachine: (ha-033260) DBG | Skipping /home - not owner
	I0930 11:11:17.494319   26946 main.go:141] libmachine: (ha-033260) define libvirt domain using xml: 
	I0930 11:11:17.494340   26946 main.go:141] libmachine: (ha-033260) <domain type='kvm'>
	I0930 11:11:17.494347   26946 main.go:141] libmachine: (ha-033260)   <name>ha-033260</name>
	I0930 11:11:17.494351   26946 main.go:141] libmachine: (ha-033260)   <memory unit='MiB'>2200</memory>
	I0930 11:11:17.494356   26946 main.go:141] libmachine: (ha-033260)   <vcpu>2</vcpu>
	I0930 11:11:17.494359   26946 main.go:141] libmachine: (ha-033260)   <features>
	I0930 11:11:17.494365   26946 main.go:141] libmachine: (ha-033260)     <acpi/>
	I0930 11:11:17.494370   26946 main.go:141] libmachine: (ha-033260)     <apic/>
	I0930 11:11:17.494377   26946 main.go:141] libmachine: (ha-033260)     <pae/>
	I0930 11:11:17.494399   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494410   26946 main.go:141] libmachine: (ha-033260)   </features>
	I0930 11:11:17.494415   26946 main.go:141] libmachine: (ha-033260)   <cpu mode='host-passthrough'>
	I0930 11:11:17.494422   26946 main.go:141] libmachine: (ha-033260)   
	I0930 11:11:17.494425   26946 main.go:141] libmachine: (ha-033260)   </cpu>
	I0930 11:11:17.494429   26946 main.go:141] libmachine: (ha-033260)   <os>
	I0930 11:11:17.494433   26946 main.go:141] libmachine: (ha-033260)     <type>hvm</type>
	I0930 11:11:17.494461   26946 main.go:141] libmachine: (ha-033260)     <boot dev='cdrom'/>
	I0930 11:11:17.494487   26946 main.go:141] libmachine: (ha-033260)     <boot dev='hd'/>
	I0930 11:11:17.494498   26946 main.go:141] libmachine: (ha-033260)     <bootmenu enable='no'/>
	I0930 11:11:17.494504   26946 main.go:141] libmachine: (ha-033260)   </os>
	I0930 11:11:17.494511   26946 main.go:141] libmachine: (ha-033260)   <devices>
	I0930 11:11:17.494518   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='cdrom'>
	I0930 11:11:17.494529   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/boot2docker.iso'/>
	I0930 11:11:17.494540   26946 main.go:141] libmachine: (ha-033260)       <target dev='hdc' bus='scsi'/>
	I0930 11:11:17.494547   26946 main.go:141] libmachine: (ha-033260)       <readonly/>
	I0930 11:11:17.494558   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494568   26946 main.go:141] libmachine: (ha-033260)     <disk type='file' device='disk'>
	I0930 11:11:17.494579   26946 main.go:141] libmachine: (ha-033260)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:11:17.494592   26946 main.go:141] libmachine: (ha-033260)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/ha-033260.rawdisk'/>
	I0930 11:11:17.494603   26946 main.go:141] libmachine: (ha-033260)       <target dev='hda' bus='virtio'/>
	I0930 11:11:17.494611   26946 main.go:141] libmachine: (ha-033260)     </disk>
	I0930 11:11:17.494625   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494636   26946 main.go:141] libmachine: (ha-033260)       <source network='mk-ha-033260'/>
	I0930 11:11:17.494646   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494655   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494664   26946 main.go:141] libmachine: (ha-033260)     <interface type='network'>
	I0930 11:11:17.494672   26946 main.go:141] libmachine: (ha-033260)       <source network='default'/>
	I0930 11:11:17.494682   26946 main.go:141] libmachine: (ha-033260)       <model type='virtio'/>
	I0930 11:11:17.494731   26946 main.go:141] libmachine: (ha-033260)     </interface>
	I0930 11:11:17.494748   26946 main.go:141] libmachine: (ha-033260)     <serial type='pty'>
	I0930 11:11:17.494754   26946 main.go:141] libmachine: (ha-033260)       <target port='0'/>
	I0930 11:11:17.494763   26946 main.go:141] libmachine: (ha-033260)     </serial>
	I0930 11:11:17.494791   26946 main.go:141] libmachine: (ha-033260)     <console type='pty'>
	I0930 11:11:17.494813   26946 main.go:141] libmachine: (ha-033260)       <target type='serial' port='0'/>
	I0930 11:11:17.494833   26946 main.go:141] libmachine: (ha-033260)     </console>
	I0930 11:11:17.494851   26946 main.go:141] libmachine: (ha-033260)     <rng model='virtio'>
	I0930 11:11:17.494868   26946 main.go:141] libmachine: (ha-033260)       <backend model='random'>/dev/random</backend>
	I0930 11:11:17.494879   26946 main.go:141] libmachine: (ha-033260)     </rng>
	I0930 11:11:17.494884   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494894   26946 main.go:141] libmachine: (ha-033260)     
	I0930 11:11:17.494900   26946 main.go:141] libmachine: (ha-033260)   </devices>
	I0930 11:11:17.494910   26946 main.go:141] libmachine: (ha-033260) </domain>
	I0930 11:11:17.494919   26946 main.go:141] libmachine: (ha-033260) 
	I0930 11:11:17.499284   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:1e:fd:d9 in network default
	I0930 11:11:17.499904   26946 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:11:17.499920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:17.500618   26946 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:11:17.501042   26946 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:11:17.501643   26946 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:11:17.502369   26946 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:11:18.692089   26946 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:11:18.692860   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.693297   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.693313   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.693260   26970 retry.go:31] will retry after 231.51107ms: waiting for machine to come up
	I0930 11:11:18.926878   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:18.927339   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:18.927367   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:18.927281   26970 retry.go:31] will retry after 238.29389ms: waiting for machine to come up
	I0930 11:11:19.167097   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.167813   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.167841   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.167759   26970 retry.go:31] will retry after 304.46036ms: waiting for machine to come up
	I0930 11:11:19.474179   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.474648   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.474678   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.474604   26970 retry.go:31] will retry after 472.499674ms: waiting for machine to come up
	I0930 11:11:19.948108   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:19.948622   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:19.948649   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:19.948597   26970 retry.go:31] will retry after 645.07677ms: waiting for machine to come up
	I0930 11:11:20.595504   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:20.595963   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:20.595984   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:20.595941   26970 retry.go:31] will retry after 894.966176ms: waiting for machine to come up
	I0930 11:11:21.492428   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:21.492831   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:21.492882   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:21.492814   26970 retry.go:31] will retry after 848.859093ms: waiting for machine to come up
	I0930 11:11:22.343403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:22.343835   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:22.343861   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:22.343753   26970 retry.go:31] will retry after 1.05973931s: waiting for machine to come up
	I0930 11:11:23.404961   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:23.405359   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:23.405385   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:23.405316   26970 retry.go:31] will retry after 1.638432323s: waiting for machine to come up
	I0930 11:11:25.046055   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:25.046452   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:25.046477   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:25.046405   26970 retry.go:31] will retry after 2.080958051s: waiting for machine to come up
	I0930 11:11:27.128708   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:27.129133   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:27.129156   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:27.129053   26970 retry.go:31] will retry after 2.256414995s: waiting for machine to come up
	I0930 11:11:29.387356   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:29.387768   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:29.387788   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:29.387745   26970 retry.go:31] will retry after 3.372456281s: waiting for machine to come up
	I0930 11:11:32.761875   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:32.762235   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:32.762254   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:32.762202   26970 retry.go:31] will retry after 3.757571385s: waiting for machine to come up
	I0930 11:11:36.524130   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:36.524597   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:11:36.524613   26946 main.go:141] libmachine: (ha-033260) DBG | I0930 11:11:36.524548   26970 retry.go:31] will retry after 4.081097536s: waiting for machine to come up
	I0930 11:11:40.609929   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610386   26946 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:11:40.610415   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.610423   26946 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:11:40.610796   26946 main.go:141] libmachine: (ha-033260) DBG | unable to find host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260
	I0930 11:11:40.682058   26946 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:11:40.682112   26946 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:11:40.682151   26946 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:11:40.684625   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.684964   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.684990   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.685088   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:11:40.685108   26946 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:11:40.685155   26946 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:11:40.685168   26946 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:11:40.685196   26946 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:11:40.813832   26946 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:11:40.814089   26946 main.go:141] libmachine: (ha-033260) KVM machine creation complete!
	I0930 11:11:40.814483   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:40.815001   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815218   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:40.815362   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:11:40.815373   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:11:40.816691   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:11:40.816703   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:11:40.816707   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:11:40.816712   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.818838   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819210   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.819240   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.819306   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.819465   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819601   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.819739   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.819883   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.820061   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.820071   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:11:40.929008   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
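
The "About to run SSH command: exit 0" / "SSH cmd err, output: <nil>" pair above is the WaitForSSH probe: dial the guest and run a no-op command until the SSH daemon accepts a session. A minimal Go sketch of such a probe, assuming golang.org/x/crypto/ssh; the key path is hypothetical and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshExitZero dials the guest and runs the same no-op "exit 0" the log shows,
// returning nil once an SSH session can be opened and the command succeeds.
func sshExitZero(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	keyBytes, err := os.ReadFile("/path/to/machines/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	for {
		if err := sshExitZero("192.168.39.249:22", cfg); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
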
	I0930 11:11:40.929033   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:11:40.929040   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:40.931913   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932264   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:40.932308   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:40.932448   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:40.932679   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932816   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:40.932931   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:40.933122   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:40.933283   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:40.933295   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:11:41.042597   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:11:41.042675   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:11:41.042682   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:11:41.042689   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.042906   26946 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:11:41.042918   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.043088   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.045281   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045591   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.045634   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.045749   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.045916   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046048   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.046166   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.046324   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.046537   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.046554   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:11:41.173460   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:11:41.173489   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.176142   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176483   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.176513   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.176659   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.176845   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.176984   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.177110   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.177285   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.177443   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.177458   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:11:41.295471   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:11:41.295501   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:11:41.295523   26946 buildroot.go:174] setting up certificates
	I0930 11:11:41.295535   26946 provision.go:84] configureAuth start
	I0930 11:11:41.295560   26946 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:11:41.295824   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:41.298508   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.298844   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.298871   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.299011   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.301187   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301504   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.301529   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.301674   26946 provision.go:143] copyHostCerts
	I0930 11:11:41.301701   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301735   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:11:41.301744   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:11:41.301807   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:11:41.301895   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301913   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:11:41.301919   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:11:41.301944   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:11:41.301997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302013   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:11:41.302019   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:11:41.302039   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:11:41.302094   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
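
The server certificate is issued with the SAN list printed above (loopback, the machine IP, and hostname aliases). A minimal, self-contained Go sketch of issuing such a cert with crypto/x509 follows; it creates a throwaway CA in memory rather than loading minikube's ca.pem/ca-key.pem, and error handling is trimmed for brevity, so it is not the provision.go implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same kind of SAN list the log prints.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
		DNSNames:     []string{"ha-033260", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("server cert: %d DER bytes, SANs %v %v\n", len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
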
	I0930 11:11:41.595618   26946 provision.go:177] copyRemoteCerts
	I0930 11:11:41.595675   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:11:41.595700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.598644   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599092   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.599122   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.599308   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.599628   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.599809   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.599990   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:41.686253   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:11:41.686348   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:11:41.716396   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:11:41.716470   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:11:41.741350   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:11:41.741426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:11:41.765879   26946 provision.go:87] duration metric: took 470.33102ms to configureAuth
	I0930 11:11:41.765904   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:11:41.766073   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:11:41.766153   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:41.768846   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769139   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:41.769163   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:41.769350   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:41.769573   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769751   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:41.769867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:41.770004   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:41.770154   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:41.770171   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:11:41.997580   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:11:41.997603   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:11:41.997612   26946 main.go:141] libmachine: (ha-033260) Calling .GetURL
	I0930 11:11:41.998809   26946 main.go:141] libmachine: (ha-033260) DBG | Using libvirt version 6000000
	I0930 11:11:42.000992   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001367   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.001403   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.001552   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:11:42.001574   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:11:42.001580   26946 client.go:171] duration metric: took 24.944976164s to LocalClient.Create
	I0930 11:11:42.001599   26946 start.go:167] duration metric: took 24.945029476s to libmachine.API.Create "ha-033260"
	I0930 11:11:42.001605   26946 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:11:42.001634   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:11:42.001658   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.001903   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:11:42.001928   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.004137   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004477   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.004506   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.004626   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.004785   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.004929   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.005073   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.088764   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:11:42.093605   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:11:42.093649   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:11:42.093718   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:11:42.093798   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:11:42.093808   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:11:42.093909   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:11:42.104383   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
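
The filesync scan above mirrors everything under .minikube/files onto the guest at the same relative path (here 110092.pem ends up in /etc/ssl/certs). A minimal Go sketch of that path mapping, assuming a hypothetical local root; it only prints the source-to-destination pairs instead of copying over SSH.

package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

func main() {
	root := "/path/to/.minikube/files" // hypothetical local root scanned for assets
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		// files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem on the guest
		fmt.Printf("local asset: %s -> /%s\n", p, filepath.ToSlash(rel))
		return nil
	})
	if err != nil && !os.IsNotExist(err) {
		panic(err)
	}
}
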
	I0930 11:11:42.133090   26946 start.go:296] duration metric: took 131.471881ms for postStartSetup
	I0930 11:11:42.133135   26946 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:11:42.133732   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.136141   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136473   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.136492   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.136788   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:11:42.136956   26946 start.go:128] duration metric: took 25.09848122s to createHost
	I0930 11:11:42.136975   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.139440   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139825   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.139853   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.139989   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.140175   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140334   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.140446   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.140582   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:11:42.140793   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:11:42.140810   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:11:42.250567   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694702.228135172
	
	I0930 11:11:42.250590   26946 fix.go:216] guest clock: 1727694702.228135172
	I0930 11:11:42.250600   26946 fix.go:229] Guest: 2024-09-30 11:11:42.228135172 +0000 UTC Remote: 2024-09-30 11:11:42.136966335 +0000 UTC m=+25.202018114 (delta=91.168837ms)
	I0930 11:11:42.250654   26946 fix.go:200] guest clock delta is within tolerance: 91.168837ms
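
The guest clock check above parses the output of `date +%s.%N`, compares it with the host-side timestamp, and accepts the drift if it is within tolerance. A small Go sketch of that comparison using the values from the log; the one-second tolerance is an assumption, since the log only reports that the ~91ms delta was acceptable.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	out := "1727694702.228135172"
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp recorded when the command returned (from the log).
	remote := time.Date(2024, 9, 30, 11, 11, 42, 136966335, time.UTC)
	delta := guest.Sub(remote)

	const tolerance = time.Second // illustrative threshold
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
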
	I0930 11:11:42.250662   26946 start.go:83] releasing machines lock for "ha-033260", held for 25.21225918s
	I0930 11:11:42.250689   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.250959   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:42.253937   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254263   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.254291   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.254395   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.254873   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255071   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:11:42.255171   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:11:42.255230   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.255277   26946 ssh_runner.go:195] Run: cat /version.json
	I0930 11:11:42.255305   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:11:42.257775   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258072   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258098   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258117   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258247   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258399   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258499   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:42.258530   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:42.258550   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.258636   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:11:42.258725   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.258782   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:11:42.258905   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:11:42.259023   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:11:42.338949   26946 ssh_runner.go:195] Run: systemctl --version
	I0930 11:11:42.367977   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:11:42.529658   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:11:42.535739   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:11:42.535805   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:11:42.553004   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:11:42.553029   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:11:42.553101   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:11:42.571333   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:11:42.586474   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:11:42.586529   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:11:42.600562   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:11:42.614592   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:11:42.724714   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:11:42.863957   26946 docker.go:233] disabling docker service ...
	I0930 11:11:42.864016   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:11:42.878829   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:11:42.892519   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:11:43.031759   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:11:43.156228   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:11:43.171439   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:11:43.190694   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:11:43.190806   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.201572   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:11:43.201660   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.212771   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.224198   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.235643   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:11:43.247521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.258652   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.276825   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:11:43.288336   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:11:43.299367   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:11:43.299422   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:11:43.314057   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:11:43.324403   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:43.446606   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:11:43.543986   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:11:43.544064   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:11:43.548794   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:11:43.548857   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:11:43.552827   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:11:43.593000   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:11:43.593096   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.624593   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:11:43.654845   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:11:43.656217   26946 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:11:43.658636   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.658956   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:11:43.658982   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:11:43.659236   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:11:43.663528   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:11:43.677810   26946 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:11:43.677905   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:11:43.677950   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:43.712140   26946 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:11:43.712231   26946 ssh_runner.go:195] Run: which lz4
	I0930 11:11:43.716210   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:11:43.716286   26946 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:11:43.720372   26946 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:11:43.720397   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:11:45.117936   26946 crio.go:462] duration metric: took 1.401668541s to copy over tarball
	I0930 11:11:45.118009   26946 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:11:47.123971   26946 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.00593624s)
	I0930 11:11:47.124002   26946 crio.go:469] duration metric: took 2.006037646s to extract the tarball
	I0930 11:11:47.124011   26946 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:11:47.161484   26946 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:11:47.208444   26946 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:11:47.208468   26946 cache_images.go:84] Images are preloaded, skipping loading
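
The preload decision above comes from `sudo crictl images --output json`: once the expected control-plane image tag is present, loading is skipped. A minimal Go sketch of that check; the JSON field names (`images`, `repoTags`) follow crictl's usual output but are assumptions here, and the command is simply run on the local host rather than over the SSH runner.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models the subset of `crictl images --output json` we need.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded for cri-o runtime.")
				return
			}
		}
	}
	fmt.Printf("couldn't find preloaded image for %q. assuming images are not preloaded.\n", want)
}
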
	I0930 11:11:47.208475   26946 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:11:47.208561   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:11:47.208632   26946 ssh_runner.go:195] Run: crio config
	I0930 11:11:47.256652   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:11:47.256671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:11:47.256679   26946 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:11:47.256700   26946 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:11:47.256808   26946 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:11:47.256829   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:11:47.256866   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:11:47.273274   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:11:47.273411   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:11:47.273489   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:11:47.284468   26946 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:11:47.284546   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:11:47.295086   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:11:47.313062   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:11:47.330490   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:11:47.348148   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0930 11:11:47.364645   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:11:47.368788   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:11:47.381517   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:11:47.516902   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:11:47.535500   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:11:47.535531   26946 certs.go:194] generating shared ca certs ...
	I0930 11:11:47.535554   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.535745   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:11:47.535819   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:11:47.535836   26946 certs.go:256] generating profile certs ...
	I0930 11:11:47.535916   26946 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:11:47.535947   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt with IP's: []
	I0930 11:11:47.718587   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt ...
	I0930 11:11:47.718617   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt: {Name:mkef0c2b538ff6ec90e4096f6b30d2cc62a0498b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718785   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key ...
	I0930 11:11:47.718795   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key: {Name:mk0bf4d552829907727733b9f23a1e78046426c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.718864   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf
	I0930 11:11:47.718878   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.254]
	I0930 11:11:47.993565   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf ...
	I0930 11:11:47.993602   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf: {Name:mk8d827ffc338aba548bc3df464e9e04ae838b07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993789   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf ...
	I0930 11:11:47.993807   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf: {Name:mka275015927a8ca9f533558d637ec2560f5b41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:47.993887   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:11:47.993965   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.c00eb5cf -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:11:47.994041   26946 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:11:47.994059   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt with IP's: []
	I0930 11:11:48.098988   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt ...
	I0930 11:11:48.099020   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt: {Name:mk7106fd4af523e8a328dae6580fd1ecc34c18b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099178   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key ...
	I0930 11:11:48.099189   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key: {Name:mka3dbe7128ec5d469ec7906155af8e6e7cc2725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:11:48.099265   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:11:48.099283   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:11:48.099294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:11:48.099304   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:11:48.099314   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:11:48.099324   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:11:48.099333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:11:48.099342   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:11:48.099385   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:11:48.099425   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:11:48.099434   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:11:48.099457   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:11:48.099481   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:11:48.099502   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:11:48.099537   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:11:48.099561   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.099574   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.099592   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.100091   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:11:48.126879   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:11:48.153722   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:11:48.179797   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:11:48.205074   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:11:48.230272   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:11:48.255030   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:11:48.279850   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:11:48.306723   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:11:48.332995   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:11:48.363646   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:11:48.392223   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:11:48.410336   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:11:48.416506   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:11:48.428642   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433601   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.433673   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:11:48.439817   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:11:48.451918   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:11:48.464282   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469211   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.469276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:11:48.475319   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:11:48.487558   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:11:48.500151   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505278   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.505355   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:11:48.511924   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:11:48.525201   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:11:48.529960   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:11:48.530014   26946 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:11:48.530081   26946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:11:48.530129   26946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:11:48.568913   26946 cri.go:89] found id: ""
	I0930 11:11:48.568975   26946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:11:48.580292   26946 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 11:11:48.593494   26946 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 11:11:48.606006   26946 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 11:11:48.606037   26946 kubeadm.go:157] found existing configuration files:
	
	I0930 11:11:48.606079   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 11:11:48.615784   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 11:11:48.615855   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 11:11:48.626018   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 11:11:48.635953   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 11:11:48.636032   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 11:11:48.646292   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.657605   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 11:11:48.657679   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 11:11:48.669154   26946 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 11:11:48.680279   26946 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 11:11:48.680348   26946 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 11:11:48.691798   26946 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 11:11:48.797903   26946 kubeadm.go:310] W0930 11:11:48.782166     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.798931   26946 kubeadm.go:310] W0930 11:11:48.783291     830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 11:11:48.907657   26946 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 11:12:00.116285   26946 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 11:12:00.116363   26946 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 11:12:00.116459   26946 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 11:12:00.116597   26946 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 11:12:00.116728   26946 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 11:12:00.116817   26946 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 11:12:00.118253   26946 out.go:235]   - Generating certificates and keys ...
	I0930 11:12:00.118344   26946 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 11:12:00.118441   26946 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 11:12:00.118536   26946 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 11:12:00.118621   26946 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 11:12:00.118710   26946 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 11:12:00.118780   26946 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 11:12:00.118849   26946 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 11:12:00.118971   26946 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119022   26946 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 11:12:00.119113   26946 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-033260 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0930 11:12:00.119209   26946 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 11:12:00.119261   26946 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 11:12:00.119300   26946 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 11:12:00.119361   26946 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 11:12:00.119418   26946 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 11:12:00.119463   26946 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 11:12:00.119517   26946 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 11:12:00.119604   26946 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 11:12:00.119657   26946 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 11:12:00.119721   26946 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 11:12:00.119813   26946 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 11:12:00.121972   26946 out.go:235]   - Booting up control plane ...
	I0930 11:12:00.122077   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 11:12:00.122168   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 11:12:00.122257   26946 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 11:12:00.122354   26946 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 11:12:00.122445   26946 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 11:12:00.122493   26946 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 11:12:00.122632   26946 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 11:12:00.122746   26946 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 11:12:00.122807   26946 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002277963s
	I0930 11:12:00.122866   26946 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 11:12:00.122914   26946 kubeadm.go:310] [api-check] The API server is healthy after 5.817139259s
	I0930 11:12:00.123017   26946 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 11:12:00.123126   26946 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 11:12:00.123189   26946 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 11:12:00.123373   26946 kubeadm.go:310] [mark-control-plane] Marking the node ha-033260 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 11:12:00.123455   26946 kubeadm.go:310] [bootstrap-token] Using token: mglnbr.4ysxjyfx6ulvufry
	I0930 11:12:00.124695   26946 out.go:235]   - Configuring RBAC rules ...
	I0930 11:12:00.124816   26946 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 11:12:00.124888   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 11:12:00.125008   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 11:12:00.125123   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 11:12:00.125226   26946 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 11:12:00.125300   26946 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 11:12:00.125399   26946 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 11:12:00.125438   26946 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 11:12:00.125482   26946 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 11:12:00.125488   26946 kubeadm.go:310] 
	I0930 11:12:00.125543   26946 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 11:12:00.125548   26946 kubeadm.go:310] 
	I0930 11:12:00.125627   26946 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 11:12:00.125640   26946 kubeadm.go:310] 
	I0930 11:12:00.125667   26946 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 11:12:00.125722   26946 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 11:12:00.125765   26946 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 11:12:00.125771   26946 kubeadm.go:310] 
	I0930 11:12:00.125822   26946 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 11:12:00.125832   26946 kubeadm.go:310] 
	I0930 11:12:00.125875   26946 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 11:12:00.125882   26946 kubeadm.go:310] 
	I0930 11:12:00.125945   26946 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 11:12:00.126010   26946 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 11:12:00.126068   26946 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 11:12:00.126073   26946 kubeadm.go:310] 
	I0930 11:12:00.126141   26946 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 11:12:00.126212   26946 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 11:12:00.126219   26946 kubeadm.go:310] 
	I0930 11:12:00.126299   26946 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126384   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 \
	I0930 11:12:00.126404   26946 kubeadm.go:310] 	--control-plane 
	I0930 11:12:00.126410   26946 kubeadm.go:310] 
	I0930 11:12:00.126493   26946 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 11:12:00.126501   26946 kubeadm.go:310] 
	I0930 11:12:00.126563   26946 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mglnbr.4ysxjyfx6ulvufry \
	I0930 11:12:00.126653   26946 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 
	I0930 11:12:00.126666   26946 cni.go:84] Creating CNI manager for ""
	I0930 11:12:00.126671   26946 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0930 11:12:00.128070   26946 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0930 11:12:00.129234   26946 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0930 11:12:00.134944   26946 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0930 11:12:00.134960   26946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0930 11:12:00.155333   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0930 11:12:00.530346   26946 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 11:12:00.530478   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260 minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=true
	I0930 11:12:00.530486   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:00.762071   26946 ops.go:34] apiserver oom_adj: -16
	I0930 11:12:00.762161   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.262836   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:01.762341   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.262939   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:02.762594   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.263292   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.762877   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 11:12:03.861166   26946 kubeadm.go:1113] duration metric: took 3.330735229s to wait for elevateKubeSystemPrivileges
	I0930 11:12:03.861207   26946 kubeadm.go:394] duration metric: took 15.331194175s to StartCluster
	I0930 11:12:03.861229   26946 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.861306   26946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.861899   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:03.862096   26946 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:03.862109   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 11:12:03.862128   26946 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:12:03.862180   26946 addons.go:69] Setting storage-provisioner=true in profile "ha-033260"
	I0930 11:12:03.862192   26946 addons.go:234] Setting addon storage-provisioner=true in "ha-033260"
	I0930 11:12:03.862117   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:12:03.862217   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.862220   26946 addons.go:69] Setting default-storageclass=true in profile "ha-033260"
	I0930 11:12:03.862242   26946 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-033260"
	I0930 11:12:03.862318   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:03.862546   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862579   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.862640   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.862674   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.878311   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44373
	I0930 11:12:03.878524   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38709
	I0930 11:12:03.878793   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.878956   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.879296   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879311   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879437   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.879458   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.879666   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.879878   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.880063   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.880274   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.880317   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.882311   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:03.882615   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:12:03.883117   26946 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:12:03.883340   26946 addons.go:234] Setting addon default-storageclass=true in "ha-033260"
	I0930 11:12:03.883377   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:03.883734   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.883774   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.895612   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0930 11:12:03.896182   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.896686   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.896706   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.897041   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.897263   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.899125   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.899133   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42283
	I0930 11:12:03.899601   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.900021   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.900036   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.900378   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.901008   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:03.901054   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:03.901205   26946 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:12:03.902407   26946 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:03.902428   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:12:03.902445   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.905497   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906023   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.906045   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.906199   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.906396   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.906554   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.906702   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.917103   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:12:03.917557   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:03.918124   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:03.918149   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:03.918507   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:03.918675   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:03.920302   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:03.920506   26946 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:03.920522   26946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:12:03.920544   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:03.923151   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923529   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:03.923552   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:03.923700   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:03.923867   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:03.923995   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:03.924108   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:03.981471   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 11:12:04.090970   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:12:04.120632   26946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:12:04.535542   26946 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0930 11:12:04.535597   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535614   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.535906   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.535923   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.535937   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.535938   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.535945   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.536174   26946 main.go:141] libmachine: (ha-033260) DBG | Closing plugin on server side
	I0930 11:12:04.536192   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.536203   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.536265   26946 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:12:04.536288   26946 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:12:04.536378   26946 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0930 11:12:04.536387   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.536394   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.536397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.616635   26946 round_trippers.go:574] Response Status: 200 OK in 80 milliseconds
	I0930 11:12:04.617143   26946 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0930 11:12:04.617157   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:04.617164   26946 round_trippers.go:473]     Content-Type: application/json
	I0930 11:12:04.617168   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:04.617171   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:04.644304   26946 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0930 11:12:04.644577   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.644596   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.644880   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.644899   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.839773   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.839805   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840111   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840131   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.840140   26946 main.go:141] libmachine: Making call to close driver server
	I0930 11:12:04.840149   26946 main.go:141] libmachine: (ha-033260) Calling .Close
	I0930 11:12:04.840370   26946 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:12:04.840384   26946 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:12:04.841979   26946 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0930 11:12:04.843256   26946 addons.go:510] duration metric: took 981.127437ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0930 11:12:04.843295   26946 start.go:246] waiting for cluster config update ...
	I0930 11:12:04.843309   26946 start.go:255] writing updated cluster config ...
	I0930 11:12:04.844944   26946 out.go:201] 
	I0930 11:12:04.846458   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:04.846524   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.848060   26946 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:12:04.849158   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:12:04.849179   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:12:04.849280   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:12:04.849291   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:12:04.849355   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:04.849507   26946 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:12:04.849551   26946 start.go:364] duration metric: took 26.46µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:12:04.849567   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:04.849642   26946 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0930 11:12:04.851226   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:12:04.851326   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:04.851360   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:04.866966   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0930 11:12:04.867433   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:04.867975   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:04.867995   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:04.868336   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:04.868557   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:04.868710   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:04.868858   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:12:04.868889   26946 client.go:168] LocalClient.Create starting
	I0930 11:12:04.868923   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:12:04.868957   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.868973   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869023   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:12:04.869042   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:12:04.869052   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:12:04.869078   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:12:04.869093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .PreCreateCheck
	I0930 11:12:04.869253   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:04.869711   26946 main.go:141] libmachine: Creating machine...
	I0930 11:12:04.869724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .Create
	I0930 11:12:04.869845   26946 main.go:141] libmachine: (ha-033260-m02) Creating KVM machine...
	I0930 11:12:04.871091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing default KVM network
	I0930 11:12:04.871157   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found existing private KVM network mk-ha-033260
	I0930 11:12:04.871294   26946 main.go:141] libmachine: (ha-033260-m02) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:04.871318   26946 main.go:141] libmachine: (ha-033260-m02) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:12:04.871364   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:04.871284   27323 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:04.871439   26946 main.go:141] libmachine: (ha-033260-m02) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:12:05.099309   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.099139   27323 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa...
	I0930 11:12:05.396113   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.395976   27323 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk...
	I0930 11:12:05.396137   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing magic tar header
	I0930 11:12:05.396150   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Writing SSH key tar header
	I0930 11:12:05.396161   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:05.396084   27323 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 ...
	I0930 11:12:05.396175   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02
	I0930 11:12:05.396200   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02 (perms=drwx------)
	I0930 11:12:05.396209   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:12:05.396245   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:12:05.396258   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:12:05.396269   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:12:05.396285   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:12:05.396302   26946 main.go:141] libmachine: (ha-033260-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:12:05.396315   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:12:05.396331   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:12:05.396348   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:12:05.396365   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:12:05.396376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:12:05.396390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Checking permissions on dir: /home
	I0930 11:12:05.396400   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Skipping /home - not owner
	I0930 11:12:05.397208   26946 main.go:141] libmachine: (ha-033260-m02) define libvirt domain using xml: 
	I0930 11:12:05.397237   26946 main.go:141] libmachine: (ha-033260-m02) <domain type='kvm'>
	I0930 11:12:05.397248   26946 main.go:141] libmachine: (ha-033260-m02)   <name>ha-033260-m02</name>
	I0930 11:12:05.397259   26946 main.go:141] libmachine: (ha-033260-m02)   <memory unit='MiB'>2200</memory>
	I0930 11:12:05.397267   26946 main.go:141] libmachine: (ha-033260-m02)   <vcpu>2</vcpu>
	I0930 11:12:05.397273   26946 main.go:141] libmachine: (ha-033260-m02)   <features>
	I0930 11:12:05.397282   26946 main.go:141] libmachine: (ha-033260-m02)     <acpi/>
	I0930 11:12:05.397289   26946 main.go:141] libmachine: (ha-033260-m02)     <apic/>
	I0930 11:12:05.397297   26946 main.go:141] libmachine: (ha-033260-m02)     <pae/>
	I0930 11:12:05.397306   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397314   26946 main.go:141] libmachine: (ha-033260-m02)   </features>
	I0930 11:12:05.397321   26946 main.go:141] libmachine: (ha-033260-m02)   <cpu mode='host-passthrough'>
	I0930 11:12:05.397329   26946 main.go:141] libmachine: (ha-033260-m02)   
	I0930 11:12:05.397335   26946 main.go:141] libmachine: (ha-033260-m02)   </cpu>
	I0930 11:12:05.397359   26946 main.go:141] libmachine: (ha-033260-m02)   <os>
	I0930 11:12:05.397379   26946 main.go:141] libmachine: (ha-033260-m02)     <type>hvm</type>
	I0930 11:12:05.397384   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='cdrom'/>
	I0930 11:12:05.397391   26946 main.go:141] libmachine: (ha-033260-m02)     <boot dev='hd'/>
	I0930 11:12:05.397407   26946 main.go:141] libmachine: (ha-033260-m02)     <bootmenu enable='no'/>
	I0930 11:12:05.397419   26946 main.go:141] libmachine: (ha-033260-m02)   </os>
	I0930 11:12:05.397427   26946 main.go:141] libmachine: (ha-033260-m02)   <devices>
	I0930 11:12:05.397438   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='cdrom'>
	I0930 11:12:05.397450   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/boot2docker.iso'/>
	I0930 11:12:05.397461   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hdc' bus='scsi'/>
	I0930 11:12:05.397468   26946 main.go:141] libmachine: (ha-033260-m02)       <readonly/>
	I0930 11:12:05.397480   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397492   26946 main.go:141] libmachine: (ha-033260-m02)     <disk type='file' device='disk'>
	I0930 11:12:05.397501   26946 main.go:141] libmachine: (ha-033260-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:12:05.397518   26946 main.go:141] libmachine: (ha-033260-m02)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/ha-033260-m02.rawdisk'/>
	I0930 11:12:05.397528   26946 main.go:141] libmachine: (ha-033260-m02)       <target dev='hda' bus='virtio'/>
	I0930 11:12:05.397538   26946 main.go:141] libmachine: (ha-033260-m02)     </disk>
	I0930 11:12:05.397548   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397565   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='mk-ha-033260'/>
	I0930 11:12:05.397579   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397590   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397605   26946 main.go:141] libmachine: (ha-033260-m02)     <interface type='network'>
	I0930 11:12:05.397627   26946 main.go:141] libmachine: (ha-033260-m02)       <source network='default'/>
	I0930 11:12:05.397641   26946 main.go:141] libmachine: (ha-033260-m02)       <model type='virtio'/>
	I0930 11:12:05.397651   26946 main.go:141] libmachine: (ha-033260-m02)     </interface>
	I0930 11:12:05.397663   26946 main.go:141] libmachine: (ha-033260-m02)     <serial type='pty'>
	I0930 11:12:05.397672   26946 main.go:141] libmachine: (ha-033260-m02)       <target port='0'/>
	I0930 11:12:05.397683   26946 main.go:141] libmachine: (ha-033260-m02)     </serial>
	I0930 11:12:05.397693   26946 main.go:141] libmachine: (ha-033260-m02)     <console type='pty'>
	I0930 11:12:05.397702   26946 main.go:141] libmachine: (ha-033260-m02)       <target type='serial' port='0'/>
	I0930 11:12:05.397716   26946 main.go:141] libmachine: (ha-033260-m02)     </console>
	I0930 11:12:05.397728   26946 main.go:141] libmachine: (ha-033260-m02)     <rng model='virtio'>
	I0930 11:12:05.397739   26946 main.go:141] libmachine: (ha-033260-m02)       <backend model='random'>/dev/random</backend>
	I0930 11:12:05.397750   26946 main.go:141] libmachine: (ha-033260-m02)     </rng>
	I0930 11:12:05.397758   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397766   26946 main.go:141] libmachine: (ha-033260-m02)     
	I0930 11:12:05.397771   26946 main.go:141] libmachine: (ha-033260-m02)   </devices>
	I0930 11:12:05.397781   26946 main.go:141] libmachine: (ha-033260-m02) </domain>
	I0930 11:12:05.397794   26946 main.go:141] libmachine: (ha-033260-m02) 
	I0930 11:12:05.404924   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:91:42:82 in network default
	I0930 11:12:05.405500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:05.405515   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:12:05.406422   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:12:05.406717   26946 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:12:05.407099   26946 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:12:05.407766   26946 main.go:141] libmachine: (ha-033260-m02) Creating domain...
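The sequence above — render the domain XML, ensure both libvirt networks are active, define and start the domain, then poll for a DHCP lease — is driven through the libvirt API by the kvm2 driver. Purely as an illustration, a rough virsh equivalent (domain and network names taken from the log; the XML file path and the polling loop are assumptions) would look like:

	# minimal sketch, assuming the XML printed above was saved to ha-033260-m02.xml
	virsh net-start default || true        # already-active networks just error out here
	virsh net-start mk-ha-033260 || true
	virsh define ha-033260-m02.xml         # register the domain with libvirt
	virsh start ha-033260-m02              # boot the VM
	# poll for the DHCP lease the driver waits for ("Waiting to get IP...")
	until virsh domifaddr ha-033260-m02 --source lease | grep -q ipv4; do sleep 2; done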
	I0930 11:12:06.665629   26946 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:12:06.666463   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.666923   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.666983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.666914   27323 retry.go:31] will retry after 236.292128ms: waiting for machine to come up
	I0930 11:12:06.904458   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:06.904973   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:06.905008   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:06.904946   27323 retry.go:31] will retry after 373.72215ms: waiting for machine to come up
	I0930 11:12:07.280653   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.281148   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.281167   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.281127   27323 retry.go:31] will retry after 417.615707ms: waiting for machine to come up
	I0930 11:12:07.700723   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:07.701173   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:07.701199   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:07.701130   27323 retry.go:31] will retry after 495.480397ms: waiting for machine to come up
	I0930 11:12:08.198698   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.199207   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.199236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.199183   27323 retry.go:31] will retry after 541.395524ms: waiting for machine to come up
	I0930 11:12:08.742190   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:08.742786   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:08.742812   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:08.742737   27323 retry.go:31] will retry after 711.22134ms: waiting for machine to come up
	I0930 11:12:09.455685   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:09.456147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:09.456172   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:09.456119   27323 retry.go:31] will retry after 1.042420332s: waiting for machine to come up
	I0930 11:12:10.499804   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:10.500316   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:10.500353   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:10.500299   27323 retry.go:31] will retry after 1.048379902s: waiting for machine to come up
	I0930 11:12:11.550177   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:11.550587   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:11.550616   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:11.550525   27323 retry.go:31] will retry after 1.84570983s: waiting for machine to come up
	I0930 11:12:13.397532   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:13.398027   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:13.398052   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:13.397980   27323 retry.go:31] will retry after 1.566549945s: waiting for machine to come up
	I0930 11:12:14.966467   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:14.966938   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:14.966983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:14.966914   27323 retry.go:31] will retry after 1.814424901s: waiting for machine to come up
	I0930 11:12:16.783827   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:16.784216   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:16.784247   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:16.784177   27323 retry.go:31] will retry after 3.594354669s: waiting for machine to come up
	I0930 11:12:20.380537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:20.380935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:20.380960   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:20.380904   27323 retry.go:31] will retry after 3.199139157s: waiting for machine to come up
	I0930 11:12:23.582795   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:23.583206   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:12:23.583227   26946 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:12:23.583181   27323 retry.go:31] will retry after 5.054668279s: waiting for machine to come up
	I0930 11:12:28.639867   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.640504   26946 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:12:28.640526   26946 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:12:28.640539   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.641001   26946 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260
	I0930 11:12:28.722236   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:12:28.722267   26946 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:12:28.722280   26946 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:12:28.724853   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725241   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.725265   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.725515   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:12:28.725540   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:12:28.725576   26946 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:12:28.725598   26946 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:12:28.725610   26946 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:12:28.854399   26946 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:12:28.854625   26946 main.go:141] libmachine: (ha-033260-m02) KVM machine creation complete!
	I0930 11:12:28.855272   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:28.855866   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856047   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:28.856170   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:12:28.856182   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:12:28.857578   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:12:28.857593   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:12:28.857600   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:12:28.857606   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.859889   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860246   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.860279   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.860438   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.860622   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860773   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.860913   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.861114   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.861325   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.861337   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:12:28.973157   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:28.973184   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:12:28.973195   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:28.976106   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976500   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:28.976531   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:28.976798   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:28.977021   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977185   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:28.977339   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:28.977493   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:28.977714   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:28.977727   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:12:29.086855   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:12:29.086927   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:12:29.086937   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:12:29.086951   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087245   26946 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:12:29.087269   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.087463   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.090156   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090525   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.090551   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.090676   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.090846   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.090986   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.091115   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.091289   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.091467   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.091479   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:12:29.220174   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:12:29.220204   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.223091   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223537   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.223567   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.223724   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.223905   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224048   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.224217   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.224385   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.224590   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.224614   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:12:29.343733   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:12:29.343767   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:12:29.343787   26946 buildroot.go:174] setting up certificates
	I0930 11:12:29.343798   26946 provision.go:84] configureAuth start
	I0930 11:12:29.343811   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:12:29.344093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:29.346631   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.346930   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.346956   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.347096   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.349248   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349664   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.349689   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.349858   26946 provision.go:143] copyHostCerts
	I0930 11:12:29.349889   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.349936   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:12:29.349948   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:12:29.350055   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:12:29.350156   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350176   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:12:29.350181   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:12:29.350207   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:12:29.350254   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350271   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:12:29.350277   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:12:29.350298   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:12:29.350347   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
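minikube generates this server certificate in Go; purely as an illustration (file names here are hypothetical, not the paths used by the test), an openssl flow producing a certificate with the same organization and SANs as the log line above would look roughly like:

	# hypothetical openssl equivalent of the cert generated above
	cat > san.cnf <<'EOF'
	[req]
	distinguished_name = dn
	[dn]
	[ext]
	subjectAltName = IP:127.0.0.1,IP:192.168.39.3,DNS:ha-033260-m02,DNS:localhost,DNS:minikube
	EOF
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-033260-m02" -out server.csr -config san.cnf
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile san.cnf -extensions ext -out server.pem -days 365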
	I0930 11:12:29.533329   26946 provision.go:177] copyRemoteCerts
	I0930 11:12:29.533387   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:12:29.533409   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.535946   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536287   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.536327   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.536541   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.536745   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.536906   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.537054   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:29.625264   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:12:29.625353   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:12:29.651589   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:12:29.651644   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:12:29.677526   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:12:29.677634   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:12:29.708210   26946 provision.go:87] duration metric: took 364.395657ms to configureAuth
	I0930 11:12:29.708246   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:12:29.708446   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:29.708540   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.711111   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711545   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.711578   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.711743   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.711914   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712073   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.712191   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.712381   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:29.712587   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:29.712611   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:12:29.956548   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:12:29.956576   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:12:29.956585   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetURL
	I0930 11:12:29.957861   26946 main.go:141] libmachine: (ha-033260-m02) DBG | Using libvirt version 6000000
	I0930 11:12:29.959943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960349   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.960376   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.960589   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:12:29.960605   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:12:29.960611   26946 client.go:171] duration metric: took 25.091713434s to LocalClient.Create
	I0930 11:12:29.960635   26946 start.go:167] duration metric: took 25.091779085s to libmachine.API.Create "ha-033260"
	I0930 11:12:29.960649   26946 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:12:29.960663   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:12:29.960682   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:29.960894   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:12:29.960921   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:29.962943   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963366   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:29.963390   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:29.963547   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:29.963747   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:29.963887   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:29.963995   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.049684   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:12:30.054345   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:12:30.054373   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:12:30.054430   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:12:30.054507   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:12:30.054516   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:12:30.054592   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:12:30.064685   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:30.090069   26946 start.go:296] duration metric: took 129.405576ms for postStartSetup
	I0930 11:12:30.090127   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:12:30.090769   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.093475   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.093805   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.093836   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.094011   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:12:30.094269   26946 start.go:128] duration metric: took 25.244614564s to createHost
	I0930 11:12:30.094293   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.096188   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096490   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.096524   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.096656   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.096825   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.096963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.097093   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.097253   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:12:30.097426   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:12:30.097439   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:12:30.206856   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694750.184612585
	
	I0930 11:12:30.206885   26946 fix.go:216] guest clock: 1727694750.184612585
	I0930 11:12:30.206895   26946 fix.go:229] Guest: 2024-09-30 11:12:30.184612585 +0000 UTC Remote: 2024-09-30 11:12:30.094281951 +0000 UTC m=+73.159334041 (delta=90.330634ms)
	I0930 11:12:30.206915   26946 fix.go:200] guest clock delta is within tolerance: 90.330634ms
	I0930 11:12:30.206922   26946 start.go:83] releasing machines lock for "ha-033260-m02", held for 25.357361614s
	I0930 11:12:30.206944   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.207256   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:30.209590   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.209935   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.209964   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.212335   26946 out.go:177] * Found network options:
	I0930 11:12:30.213673   26946 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:12:30.215021   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.215056   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215673   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215843   26946 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:12:30.215938   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:12:30.215976   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:12:30.215983   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:12:30.216054   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:12:30.216075   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:12:30.218771   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.218983   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219125   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219147   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219360   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219434   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:30.219459   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:30.219516   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219662   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:12:30.219670   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.219831   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:12:30.219846   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.219963   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:12:30.220088   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:12:30.454192   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:12:30.462288   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:12:30.462348   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:12:30.479853   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:12:30.479878   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:12:30.479941   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:12:30.496617   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:12:30.512078   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:12:30.512142   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:12:30.526557   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:12:30.541136   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:12:30.655590   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:12:30.814049   26946 docker.go:233] disabling docker service ...
	I0930 11:12:30.814123   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:12:30.829972   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:12:30.844068   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:12:30.969831   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:12:31.096443   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:12:31.111612   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:12:31.131553   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:12:31.131621   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.143596   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:12:31.143658   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.156112   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.167422   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.179559   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:12:31.192037   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.203507   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.222188   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:12:31.234115   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:12:31.245344   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:12:31.245401   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:12:31.259589   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:12:31.269907   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:31.388443   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
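The sed edits above leave the CRI-O drop-in roughly in the following shape at the point of this restart. This is a sketch assuming the stock section layout of /etc/crio/crio.conf.d/02-crio.conf; the real file on the VM may carry additional keys:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]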
	I0930 11:12:31.482864   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:12:31.482933   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:12:31.487957   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:12:31.488026   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:12:31.492173   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:12:31.530740   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:12:31.530821   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.560435   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:12:31.592377   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:12:31.593888   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:12:31.595254   26946 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:12:31.598165   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598504   26946 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:12:19 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:12:31.598535   26946 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:12:31.598710   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:12:31.603081   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:31.616231   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:12:31.616424   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:31.616676   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.616714   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.631793   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46637
	I0930 11:12:31.632254   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.632734   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.632757   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.633092   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.633272   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:12:31.634860   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:31.635130   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:31.635170   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:31.649687   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0930 11:12:31.650053   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:31.650497   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:31.650520   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:31.650803   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:31.650951   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:31.651118   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:12:31.651130   26946 certs.go:194] generating shared ca certs ...
	I0930 11:12:31.651148   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.651260   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:12:31.651304   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:12:31.651313   26946 certs.go:256] generating profile certs ...
	I0930 11:12:31.651410   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:12:31.651435   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87
	I0930 11:12:31.651449   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.254]
	I0930 11:12:31.912914   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 ...
	I0930 11:12:31.912947   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87: {Name:mk5789d867ee86689334498533835b6baa525e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913110   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 ...
	I0930 11:12:31.913123   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87: {Name:mkcd56431095ebd059864bd581ed7c141670cf4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:12:31.913195   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:12:31.913335   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.e8d40a87 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:12:31.913463   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:12:31.913478   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:12:31.913490   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:12:31.913500   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:12:31.913510   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:12:31.913520   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:12:31.913529   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:12:31.913539   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:12:31.913551   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:12:31.913591   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:12:31.913648   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:12:31.913661   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:12:31.913690   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:12:31.913712   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:12:31.913735   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:12:31.913780   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:12:31.913806   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:31.913824   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:12:31.913836   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:12:31.913865   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:31.917099   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917453   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:31.917482   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:31.917675   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:31.917892   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:31.918041   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:31.918169   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:31.994019   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:12:31.999621   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:12:32.012410   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:12:32.017661   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:12:32.028991   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:12:32.034566   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:12:32.047607   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:12:32.052664   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:12:32.069473   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:12:32.074705   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:12:32.086100   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:12:32.090557   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:12:32.103048   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:12:32.132371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:12:32.159806   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:12:32.185933   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:12:32.210826   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 11:12:32.236862   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:12:32.262441   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:12:32.289773   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:12:32.318287   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:12:32.347371   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:12:32.372327   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:12:32.397781   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:12:32.415260   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:12:32.433137   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:12:32.450661   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:12:32.467444   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:12:32.484994   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:12:32.503412   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:12:32.522919   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:12:32.529057   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:12:32.541643   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546691   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.546753   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:12:32.553211   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:12:32.565054   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:12:32.576855   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581764   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.581818   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:12:32.588983   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:12:32.602082   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:12:32.613340   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617722   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.617775   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:12:32.623445   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
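	[editor note] The openssl/ln sequence above is how the uploaded CA files get registered with the node's system trust store: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is the filename OpenSSL-based clients look up. A minimal Go sketch of the same idea, run locally rather than over ssh_runner; the paths are illustrative and it assumes an openssl binary on PATH plus write access to the trust directory:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert links a CA PEM into trustDir under its OpenSSL subject hash,
	// mirroring the "openssl x509 -hash -noout" + "ln -fs" pair in the log.
	func trustCert(pem, trustDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(trustDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		// Illustrative paths; on the minikube node the trust dir is /etc/ssl/certs.
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}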
	I0930 11:12:32.635275   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:12:32.639755   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:12:32.639812   26946 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:12:32.639905   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:12:32.639928   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:12:32.639958   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:12:32.657152   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:12:32.657231   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
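	[editor note] The block above is the static-pod manifest minikube writes to /etc/kubernetes/manifests/kube-vip.yaml so kube-vip can advertise the HA virtual IP (192.168.39.254) and load-balance API server port 8443 across the control-plane nodes. As a rough illustration of how such a manifest can be rendered from a handful of values, here is a small Go text/template sketch; the struct fields and template body are assumptions made for the example, not minikube's actual kube-vip.go template:

	package main

	import (
		"os"
		"text/template"
	)

	// vipConfig holds the few values the manifest needs; the field names are
	// illustrative, not minikube's own types.
	type vipConfig struct {
		VIP       string // virtual IP advertised by kube-vip
		Interface string // NIC the VIP is bound to
		Port      int    // API server port behind the VIP
	}

	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    args: ["manager"]
	    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .VIP }}
	    - name: port
	      value: "{{ .Port }}"
	    - name: cp_enable
	      value: "true"
	    - name: lb_enable
	      value: "true"
	  hostNetwork: true
	`

	func main() {
		cfg := vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		// Render to stdout; minikube copies the real manifest onto the node
		// as /etc/kubernetes/manifests/kube-vip.yaml.
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}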
	I0930 11:12:32.657301   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.669072   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:12:32.669126   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:12:32.681078   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:12:32.681102   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681147   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0930 11:12:32.681159   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:12:32.681202   26946 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0930 11:12:32.685896   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:12:32.685930   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:12:33.355089   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.355169   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:12:33.360551   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:12:33.360593   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:12:33.497331   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:12:33.536292   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.536381   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:12:33.556993   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:12:33.557034   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0930 11:12:33.963212   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:12:33.973956   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:12:33.992407   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:12:34.010174   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:12:34.027647   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:12:34.031715   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:12:34.045021   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:34.164493   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:34.181854   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:12:34.182385   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:12:34.182436   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:12:34.197448   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34395
	I0930 11:12:34.197925   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:12:34.198415   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:12:34.198439   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:12:34.198777   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:12:34.199019   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:12:34.199179   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:12:34.199281   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:12:34.199296   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:12:34.202318   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202754   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:12:34.202783   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:12:34.202947   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:12:34.203150   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:12:34.203332   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:12:34.203477   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:12:34.356774   26946 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:34.356813   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443"
	I0930 11:12:56.361665   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hn6im1.2otceyiojx5fmqqd --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m02 --control-plane --apiserver-advertise-address=192.168.39.3 --apiserver-bind-port=8443": (22.004830324s)
	I0930 11:12:56.361703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:12:57.091049   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m02 minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:12:57.252660   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:12:57.383009   26946 start.go:319] duration metric: took 23.183825523s to joinCluster
	I0930 11:12:57.383083   26946 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:12:57.383372   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:12:57.384696   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:12:57.385781   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:12:57.652948   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:12:57.700673   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:12:57.700909   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:12:57.700967   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:12:57.701166   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:12:57.701263   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:57.701272   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:57.701283   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:57.701288   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:57.710787   26946 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:12:58.201703   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.201723   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.201733   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.201738   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.218761   26946 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:12:58.701415   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:58.701436   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:58.701444   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:58.701447   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:58.707425   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:12:59.202375   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.202398   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.202410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.202416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.206657   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.701590   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:12:59.701611   26946 round_trippers.go:469] Request Headers:
	I0930 11:12:59.701635   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:12:59.701642   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:12:59.706264   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:12:59.707024   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:00.201877   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.201901   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.201917   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.201924   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.205419   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:00.701357   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:00.701378   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:00.701386   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:00.701391   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:00.706252   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:01.202282   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.202307   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.202319   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.202325   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.206013   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:01.701738   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:01.701760   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:01.701768   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:01.701773   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:01.705302   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.202004   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.202030   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.202043   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.202051   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.205535   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:02.206136   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:02.701406   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:02.701427   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:02.701436   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:02.701440   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:02.704929   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.202160   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.202189   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.202198   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.202204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.205838   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:03.701797   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:03.701821   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:03.701832   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:03.701841   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:03.706107   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:04.201592   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.201623   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.201634   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.201641   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.204858   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:04.701789   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:04.701812   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:04.701825   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:04.701831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:04.710541   26946 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:13:04.711317   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:05.202211   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.202237   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.202248   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.202255   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.206000   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:05.702240   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:05.702263   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:05.702272   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:05.702276   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:05.713473   26946 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0930 11:13:06.201370   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.201398   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.201412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.201421   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.205062   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:06.702136   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:06.702157   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:06.702170   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:06.702178   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:06.707226   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:07.201911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.201933   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.201941   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.201947   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.205398   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:07.206056   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:07.702203   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:07.702228   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:07.702236   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:07.702240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:07.705652   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.201364   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.201385   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.201393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.201397   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.204682   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:08.701564   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:08.701585   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:08.701593   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:08.701597   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:08.704941   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.201826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.201874   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.201887   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.201892   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.205730   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:09.206265   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:09.701548   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:09.701576   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:09.701584   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:09.701588   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:09.704970   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.202351   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.202382   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.202393   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.202402   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.205886   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:10.701694   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:10.701717   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:10.701725   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:10.701729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:10.705252   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.202235   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.202256   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.202264   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.205904   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:11.206456   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:11.701817   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:11.701840   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:11.701848   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:11.701852   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:11.705418   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:12.202233   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.202257   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.202267   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.202273   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.206552   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:12.701910   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:12.701932   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:12.701940   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:12.701944   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:12.705423   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.201690   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.201715   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.201727   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.201733   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.205360   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.701378   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:13.701402   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:13.701410   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:13.701416   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:13.704921   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:13.705712   26946 node_ready.go:53] node "ha-033260-m02" has status "Ready":"False"
	I0930 11:13:14.202280   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.202303   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.202313   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.202317   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.206153   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.701500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.701536   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.701545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.701549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.705110   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.705891   26946 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:13:14.705919   26946 node_ready.go:38] duration metric: took 17.004728232s for node "ha-033260-m02" to be "Ready" ...
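	[editor note] The ~17s wait above is a simple poll: GET /api/v1/nodes/ha-033260-m02 roughly every 500ms until the node's Ready condition reports True. A hedged client-go sketch of that pattern follows (this is not minikube's round_trippers-based code; the kubeconfig path, node name, interval, and timeout come from the log, but the helper itself is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True,
	// mirroring the GET /api/v1/nodes/<name> loop in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("node %q not Ready after %s", name, timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Kubeconfig path as used by this test run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19734-3842/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-033260-m02", 500*time.Millisecond, 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`node "ha-033260-m02" is Ready`)
	}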
	I0930 11:13:14.705930   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:13:14.706003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:14.706012   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.706019   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.706027   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.710637   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.717034   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.717112   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:13:14.717120   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.717127   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.717132   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.720167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.720847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.720863   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.720870   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.720874   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.723869   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.724515   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.724535   26946 pod_ready.go:82] duration metric: took 7.4758ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724545   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.724613   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:13:14.724621   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.724628   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.724634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.727903   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.728724   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.728741   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.728751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.728757   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.731653   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:13:14.732553   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.732574   26946 pod_ready.go:82] duration metric: took 8.020759ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732586   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.732653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:13:14.732664   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.732674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.732682   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.735972   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.736968   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:14.736990   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.737001   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.737006   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.742593   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:14.743126   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.743157   26946 pod_ready.go:82] duration metric: took 10.560613ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743170   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.743261   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:13:14.743274   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.743284   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.743295   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.746988   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:14.747647   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:14.747666   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.747678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.747685   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.752616   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:14.753409   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:14.753424   26946 pod_ready.go:82] duration metric: took 10.242469ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.753437   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:14.901974   26946 request.go:632] Waited for 148.458979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902036   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:13:14.902043   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:14.902055   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:14.902060   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:14.905987   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.101905   26946 request.go:632] Waited for 195.35281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.101994   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.102002   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.102014   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.102020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.106060   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:15.106613   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.106631   26946 pod_ready.go:82] duration metric: took 353.188275ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.106640   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.301775   26946 request.go:632] Waited for 195.071866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301852   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:13:15.301859   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.301869   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.301877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.305432   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.502470   26946 request.go:632] Waited for 196.425957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502545   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:15.502550   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.502559   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.502564   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.506368   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.506795   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.506815   26946 pod_ready.go:82] duration metric: took 400.168693ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.506824   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.702050   26946 request.go:632] Waited for 195.162388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702133   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:13:15.702141   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.702152   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.702163   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.705891   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.901957   26946 request.go:632] Waited for 195.415244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:15.902032   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:15.902045   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:15.902050   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:15.905760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:15.906550   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:15.906568   26946 pod_ready.go:82] duration metric: took 399.738814ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:15.906577   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.101960   26946 request.go:632] Waited for 195.295618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102015   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:13:16.102020   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.102027   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.102034   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.105657   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.301949   26946 request.go:632] Waited for 195.400353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302010   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.302015   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.302022   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.302028   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.306149   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:16.306664   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.306684   26946 pod_ready.go:82] duration metric: took 400.100909ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.306693   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.501852   26946 request.go:632] Waited for 195.093896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501929   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:13:16.501936   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.501944   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.501948   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.505624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.702111   26946 request.go:632] Waited for 195.755005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702172   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:16.702201   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.702232   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.702242   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.706191   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:16.706772   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:16.706793   26946 pod_ready.go:82] duration metric: took 400.093034ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.706806   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:16.901822   26946 request.go:632] Waited for 194.939903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901874   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:13:16.901878   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:16.901886   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:16.901890   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:16.905939   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.102468   26946 request.go:632] Waited for 195.869654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102551   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.102559   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.102570   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.102576   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.105889   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.106573   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.106594   26946 pod_ready.go:82] duration metric: took 399.778126ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.106605   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.301593   26946 request.go:632] Waited for 194.913576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301653   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:13:17.301658   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.301671   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.301678   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.305178   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.502249   26946 request.go:632] Waited for 196.387698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502326   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:13:17.502350   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.502358   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.502362   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.505833   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.506907   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.506935   26946 pod_ready.go:82] duration metric: took 400.319251ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.506948   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.701919   26946 request.go:632] Waited for 194.9063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.701999   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:13:17.702006   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.702017   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.702028   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.705520   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:13:17.902402   26946 request.go:632] Waited for 196.207639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902477   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:13:17.902485   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.902500   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.902526   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.906656   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:17.907109   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:13:17.907128   26946 pod_ready.go:82] duration metric: took 400.172408ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:13:17.907142   26946 pod_ready.go:39] duration metric: took 3.201195785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
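Aside: the pod_ready.go lines above poll each system pod until its Ready condition reports True. Below is a minimal client-go sketch of that check, illustrative only and not minikube's helper; the fake clientset and pod name are stand-ins so the snippet runs without a cluster.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// podReady reports whether the pod's Ready condition is True -- the check the
// log above records as: has status "Ready":"True".
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

// checkPod fetches one pod and evaluates its readiness; a waiter would call
// this in a loop until it returns true or a timeout expires.
func checkPod(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return podReady(pod), nil
}

func main() {
	// Fake clientset seeded with one Ready pod so the sketch runs standalone.
	cs := fake.NewSimpleClientset(&corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kube-scheduler-ha-033260", Namespace: "kube-system"},
		Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}},
	})
	ready, err := checkPod(context.Background(), cs, "kube-system", "kube-scheduler-ha-033260")
	fmt.Println(ready, err)
}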
	I0930 11:13:17.907159   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:13:17.907218   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:13:17.923202   26946 api_server.go:72] duration metric: took 20.540084285s to wait for apiserver process to appear ...
	I0930 11:13:17.923232   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:13:17.923251   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:13:17.929517   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:13:17.929596   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:13:17.929602   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:17.929631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:17.929636   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:17.930581   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:13:17.930807   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:13:17.930834   26946 api_server.go:131] duration metric: took 7.593991ms to wait for apiserver health ...
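Aside: the api_server.go lines above probe /healthz until it answers 200 with a body of "ok" before reading /version. Below is a small standard-library sketch of that probe, illustrative only; the insecure TLS setting stands in for loading the cluster's CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 and a body of "ok", the same
// probe the log above records against /healthz.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test apiserver uses a minikube-generated CA; a real client would
		// load that CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never reported healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.249:8443/healthz", 30*time.Second))
}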
	I0930 11:13:17.930843   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:13:18.102359   26946 request.go:632] Waited for 171.419304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102425   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.102433   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.102442   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.102449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.107679   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:13:18.114591   26946 system_pods.go:59] 17 kube-system pods found
	I0930 11:13:18.114717   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.114749   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.114780   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.114803   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.114826   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.114841   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.114876   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.114899   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.114915   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.114935   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.114950   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.114975   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.114997   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.115011   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.115025   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.115059   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.115132   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.115146   26946 system_pods.go:74] duration metric: took 184.295086ms to wait for pod list to return data ...
	I0930 11:13:18.115155   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:13:18.301606   26946 request.go:632] Waited for 186.324564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301691   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:13:18.301697   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.301704   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.301708   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.305792   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.306031   26946 default_sa.go:45] found service account: "default"
	I0930 11:13:18.306053   26946 default_sa.go:55] duration metric: took 190.887438ms for default service account to be created ...
	I0930 11:13:18.306064   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:13:18.502520   26946 request.go:632] Waited for 196.381212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502574   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:13:18.502580   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.502589   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.502594   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.507606   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.513786   26946 system_pods.go:86] 17 kube-system pods found
	I0930 11:13:18.513814   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:13:18.513820   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:13:18.513824   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:13:18.513828   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:13:18.513832   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:13:18.513835   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:13:18.513838   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:13:18.513842   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:13:18.513845   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:13:18.513849   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:13:18.513852   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:13:18.513855   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:13:18.513858   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:13:18.513864   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:13:18.513868   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:13:18.513871   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:13:18.513874   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:13:18.513883   26946 system_pods.go:126] duration metric: took 207.809961ms to wait for k8s-apps to be running ...
	I0930 11:13:18.513889   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:13:18.513933   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:18.530491   26946 system_svc.go:56] duration metric: took 16.594303ms WaitForService to wait for kubelet
	I0930 11:13:18.530520   26946 kubeadm.go:582] duration metric: took 21.147406438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:13:18.530536   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:13:18.701935   26946 request.go:632] Waited for 171.311845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.701998   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:13:18.702004   26946 round_trippers.go:469] Request Headers:
	I0930 11:13:18.702013   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:13:18.702020   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:13:18.706454   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:13:18.707258   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707286   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707302   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:13:18.707309   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:13:18.707315   26946 node_conditions.go:105] duration metric: took 176.773141ms to run NodePressure ...
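Aside: node_conditions.go reads each node's capacity (ephemeral storage and CPU) from the Node objects, which is where the figures above come from. Below is a short client-go sketch of the same read, illustrative only; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the run above uses the minikube-integration kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// The same figures logged above: ephemeral-storage and cpu capacity.
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
	}
}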
	I0930 11:13:18.707329   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:13:18.707365   26946 start.go:255] writing updated cluster config ...
	I0930 11:13:18.709744   26946 out.go:201] 
	I0930 11:13:18.711365   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:18.711455   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.713157   26946 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:13:18.714611   26946 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:13:18.714636   26946 cache.go:56] Caching tarball of preloaded images
	I0930 11:13:18.714744   26946 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:13:18.714757   26946 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:13:18.714852   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:18.715040   26946 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:13:18.715084   26946 start.go:364] duration metric: took 25.338µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:13:18.715101   26946 start.go:93] Provisioning new machine with config: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:18.715188   26946 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0930 11:13:18.716794   26946 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 11:13:18.716894   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:18.716928   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:18.732600   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I0930 11:13:18.733109   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:18.733561   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:18.733575   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:18.733910   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:18.734089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:18.734238   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:18.734421   26946 start.go:159] libmachine.API.Create for "ha-033260" (driver="kvm2")
	I0930 11:13:18.734451   26946 client.go:168] LocalClient.Create starting
	I0930 11:13:18.734489   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 11:13:18.734529   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734544   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734600   26946 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 11:13:18.734619   26946 main.go:141] libmachine: Decoding PEM data...
	I0930 11:13:18.734631   26946 main.go:141] libmachine: Parsing certificate...
	I0930 11:13:18.734648   26946 main.go:141] libmachine: Running pre-create checks...
	I0930 11:13:18.734656   26946 main.go:141] libmachine: (ha-033260-m03) Calling .PreCreateCheck
	I0930 11:13:18.734797   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:18.735196   26946 main.go:141] libmachine: Creating machine...
	I0930 11:13:18.735209   26946 main.go:141] libmachine: (ha-033260-m03) Calling .Create
	I0930 11:13:18.735336   26946 main.go:141] libmachine: (ha-033260-m03) Creating KVM machine...
	I0930 11:13:18.736643   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing default KVM network
	I0930 11:13:18.736820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found existing private KVM network mk-ha-033260
	I0930 11:13:18.736982   26946 main.go:141] libmachine: (ha-033260-m03) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:18.737011   26946 main.go:141] libmachine: (ha-033260-m03) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 11:13:18.737118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.736992   27716 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:18.737204   26946 main.go:141] libmachine: (ha-033260-m03) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 11:13:18.965830   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:18.965684   27716 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa...
	I0930 11:13:19.182387   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182221   27716 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk...
	I0930 11:13:19.182427   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing magic tar header
	I0930 11:13:19.182442   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Writing SSH key tar header
	I0930 11:13:19.182454   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:19.182378   27716 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 ...
	I0930 11:13:19.182548   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03
	I0930 11:13:19.182570   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 11:13:19.182578   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03 (perms=drwx------)
	I0930 11:13:19.182587   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 11:13:19.182596   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 11:13:19.182610   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 11:13:19.182620   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 11:13:19.182634   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:13:19.182647   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 11:13:19.182661   26946 main.go:141] libmachine: (ha-033260-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 11:13:19.182678   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 11:13:19.182687   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:19.182699   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home/jenkins
	I0930 11:13:19.182796   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Checking permissions on dir: /home
	I0930 11:13:19.182820   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Skipping /home - not owner
	I0930 11:13:19.183716   26946 main.go:141] libmachine: (ha-033260-m03) define libvirt domain using xml: 
	I0930 11:13:19.183740   26946 main.go:141] libmachine: (ha-033260-m03) <domain type='kvm'>
	I0930 11:13:19.183766   26946 main.go:141] libmachine: (ha-033260-m03)   <name>ha-033260-m03</name>
	I0930 11:13:19.183787   26946 main.go:141] libmachine: (ha-033260-m03)   <memory unit='MiB'>2200</memory>
	I0930 11:13:19.183800   26946 main.go:141] libmachine: (ha-033260-m03)   <vcpu>2</vcpu>
	I0930 11:13:19.183806   26946 main.go:141] libmachine: (ha-033260-m03)   <features>
	I0930 11:13:19.183817   26946 main.go:141] libmachine: (ha-033260-m03)     <acpi/>
	I0930 11:13:19.183827   26946 main.go:141] libmachine: (ha-033260-m03)     <apic/>
	I0930 11:13:19.183836   26946 main.go:141] libmachine: (ha-033260-m03)     <pae/>
	I0930 11:13:19.183845   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.183853   26946 main.go:141] libmachine: (ha-033260-m03)   </features>
	I0930 11:13:19.183861   26946 main.go:141] libmachine: (ha-033260-m03)   <cpu mode='host-passthrough'>
	I0930 11:13:19.183868   26946 main.go:141] libmachine: (ha-033260-m03)   
	I0930 11:13:19.183881   26946 main.go:141] libmachine: (ha-033260-m03)   </cpu>
	I0930 11:13:19.183892   26946 main.go:141] libmachine: (ha-033260-m03)   <os>
	I0930 11:13:19.183902   26946 main.go:141] libmachine: (ha-033260-m03)     <type>hvm</type>
	I0930 11:13:19.183911   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='cdrom'/>
	I0930 11:13:19.183924   26946 main.go:141] libmachine: (ha-033260-m03)     <boot dev='hd'/>
	I0930 11:13:19.183936   26946 main.go:141] libmachine: (ha-033260-m03)     <bootmenu enable='no'/>
	I0930 11:13:19.183942   26946 main.go:141] libmachine: (ha-033260-m03)   </os>
	I0930 11:13:19.183951   26946 main.go:141] libmachine: (ha-033260-m03)   <devices>
	I0930 11:13:19.183961   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='cdrom'>
	I0930 11:13:19.183975   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/boot2docker.iso'/>
	I0930 11:13:19.183985   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hdc' bus='scsi'/>
	I0930 11:13:19.183993   26946 main.go:141] libmachine: (ha-033260-m03)       <readonly/>
	I0930 11:13:19.184007   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184019   26946 main.go:141] libmachine: (ha-033260-m03)     <disk type='file' device='disk'>
	I0930 11:13:19.184028   26946 main.go:141] libmachine: (ha-033260-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 11:13:19.184041   26946 main.go:141] libmachine: (ha-033260-m03)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/ha-033260-m03.rawdisk'/>
	I0930 11:13:19.184052   26946 main.go:141] libmachine: (ha-033260-m03)       <target dev='hda' bus='virtio'/>
	I0930 11:13:19.184065   26946 main.go:141] libmachine: (ha-033260-m03)     </disk>
	I0930 11:13:19.184076   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184137   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='mk-ha-033260'/>
	I0930 11:13:19.184167   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184179   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184187   26946 main.go:141] libmachine: (ha-033260-m03)     <interface type='network'>
	I0930 11:13:19.184197   26946 main.go:141] libmachine: (ha-033260-m03)       <source network='default'/>
	I0930 11:13:19.184205   26946 main.go:141] libmachine: (ha-033260-m03)       <model type='virtio'/>
	I0930 11:13:19.184215   26946 main.go:141] libmachine: (ha-033260-m03)     </interface>
	I0930 11:13:19.184223   26946 main.go:141] libmachine: (ha-033260-m03)     <serial type='pty'>
	I0930 11:13:19.184242   26946 main.go:141] libmachine: (ha-033260-m03)       <target port='0'/>
	I0930 11:13:19.184249   26946 main.go:141] libmachine: (ha-033260-m03)     </serial>
	I0930 11:13:19.184259   26946 main.go:141] libmachine: (ha-033260-m03)     <console type='pty'>
	I0930 11:13:19.184267   26946 main.go:141] libmachine: (ha-033260-m03)       <target type='serial' port='0'/>
	I0930 11:13:19.184277   26946 main.go:141] libmachine: (ha-033260-m03)     </console>
	I0930 11:13:19.184285   26946 main.go:141] libmachine: (ha-033260-m03)     <rng model='virtio'>
	I0930 11:13:19.184297   26946 main.go:141] libmachine: (ha-033260-m03)       <backend model='random'>/dev/random</backend>
	I0930 11:13:19.184305   26946 main.go:141] libmachine: (ha-033260-m03)     </rng>
	I0930 11:13:19.184313   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184326   26946 main.go:141] libmachine: (ha-033260-m03)     
	I0930 11:13:19.184337   26946 main.go:141] libmachine: (ha-033260-m03)   </devices>
	I0930 11:13:19.184344   26946 main.go:141] libmachine: (ha-033260-m03) </domain>
	I0930 11:13:19.184355   26946 main.go:141] libmachine: (ha-033260-m03) 
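Aside: the XML above is the libvirt domain definition libmachine generates for the new VM. Below is a minimal sketch of defining and booting such a domain through the Go libvirt bindings, illustrative only; the module path, file name, and error handling are assumptions, not the kvm2 driver's code.

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt" // assumed binding; older code imports github.com/libvirt/libvirt-go
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}

	// Define (persist) the domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain", name)
}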
	I0930 11:13:19.191067   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:09:7f:ae in network default
	I0930 11:13:19.191719   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:13:19.191738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:19.192592   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:13:19.192924   26946 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:13:19.193268   26946 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:13:19.193941   26946 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:13:20.468738   26946 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:13:20.469515   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.469944   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.469970   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.469926   27716 retry.go:31] will retry after 232.398954ms: waiting for machine to come up
	I0930 11:13:20.704544   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:20.704996   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:20.705026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:20.704955   27716 retry.go:31] will retry after 380.728938ms: waiting for machine to come up
	I0930 11:13:21.087407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.087831   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.087853   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.087810   27716 retry.go:31] will retry after 405.871711ms: waiting for machine to come up
	I0930 11:13:21.495366   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.495857   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.495885   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.495810   27716 retry.go:31] will retry after 380.57456ms: waiting for machine to come up
	I0930 11:13:21.878262   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:21.878697   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:21.878718   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:21.878678   27716 retry.go:31] will retry after 486.639816ms: waiting for machine to come up
	I0930 11:13:22.367485   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:22.367998   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:22.368026   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:22.367946   27716 retry.go:31] will retry after 818.869274ms: waiting for machine to come up
	I0930 11:13:23.187832   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:23.188286   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:23.188306   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:23.188246   27716 retry.go:31] will retry after 870.541242ms: waiting for machine to come up
	I0930 11:13:24.060866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:24.061364   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:24.061403   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:24.061339   27716 retry.go:31] will retry after 1.026163442s: waiting for machine to come up
	I0930 11:13:25.089407   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:25.089859   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:25.089889   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:25.089789   27716 retry.go:31] will retry after 1.677341097s: waiting for machine to come up
	I0930 11:13:26.769716   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:26.770127   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:26.770173   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:26.770102   27716 retry.go:31] will retry after 2.102002194s: waiting for machine to come up
	I0930 11:13:28.873495   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:28.874089   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:28.874118   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:28.874042   27716 retry.go:31] will retry after 2.512249945s: waiting for machine to come up
	I0930 11:13:31.388375   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:31.388813   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:31.388842   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:31.388766   27716 retry.go:31] will retry after 3.025058152s: waiting for machine to come up
	I0930 11:13:34.415391   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:34.415806   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:34.415826   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:34.415764   27716 retry.go:31] will retry after 3.6491044s: waiting for machine to come up
	I0930 11:13:38.067512   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:38.067932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:13:38.067957   26946 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:13:38.067891   27716 retry.go:31] will retry after 5.462753525s: waiting for machine to come up
	I0930 11:13:43.535257   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.535767   26946 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:13:43.535792   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
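Aside: the retry.go lines above wait for the DHCP lease with growing delays between attempts. Below is a generic sketch of that retry-with-backoff shape, illustrative only; it is not minikube's retry package, and the jitter and cap values are assumptions.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling lookup with a growing, jittered delay until it
// succeeds or the deadline passes -- the shape of the "will retry after ..."
// lines above.
func retryUntil(lookup func() (string, error), maxWait time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 3*time.Second {
			delay *= 2 // back off, but keep individual waits within a few seconds
		}
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	attempts := 0
	ip, err := retryUntil(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.238", nil
	}, time.Minute)
	fmt.Println(ip, err)
}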
	I0930 11:13:43.535800   26946 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:13:43.536253   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260
	I0930 11:13:43.612168   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:43.612200   26946 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:13:43.612213   26946 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:13:43.614758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:43.615073   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260
	I0930 11:13:43.615102   26946 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find defined IP address of network mk-ha-033260 interface with MAC address 52:54:00:f2:70:c8
	I0930 11:13:43.615180   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:43.615208   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:43.615240   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:43.615252   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:43.615269   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:43.619189   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: exit status 255: 
	I0930 11:13:43.619212   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0930 11:13:43.619222   26946 main.go:141] libmachine: (ha-033260-m03) DBG | command : exit 0
	I0930 11:13:43.619233   26946 main.go:141] libmachine: (ha-033260-m03) DBG | err     : exit status 255
	I0930 11:13:43.619246   26946 main.go:141] libmachine: (ha-033260-m03) DBG | output  : 
	I0930 11:13:46.621877   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:13:46.624327   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.624849   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.624873   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.625052   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:13:46.625075   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:13:46.625113   26946 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:13:46.625125   26946 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:13:46.625137   26946 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:13:46.749932   26946 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
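Aside: WaitForSSH above shells out to ssh with "exit 0" until the command succeeds; the earlier exit status 255 simply means sshd was not accepting connections yet. Below is a minimal os/exec sketch of that probe, illustrative only; the key path and address are placeholders.

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` against the new VM, mirroring the external
// SSH probe above. A non-zero exit means "not yet".
func sshReady(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(sshReady("/path/to/id_rsa", "192.168.39.238"))
}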
	I0930 11:13:46.750211   26946 main.go:141] libmachine: (ha-033260-m03) KVM machine creation complete!
	I0930 11:13:46.750551   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:46.751116   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751371   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:46.751553   26946 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 11:13:46.751568   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:13:46.752698   26946 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 11:13:46.752714   26946 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 11:13:46.752721   26946 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 11:13:46.752728   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.755296   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.755738   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.755877   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.756027   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756136   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.756284   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.756448   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.756639   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.756651   26946 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 11:13:46.857068   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:46.857090   26946 main.go:141] libmachine: Detecting the provisioner...
	I0930 11:13:46.857097   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.859904   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860340   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.860372   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.860564   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.860899   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861065   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.861200   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.861350   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.861511   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.861526   26946 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 11:13:46.970453   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 11:13:46.970520   26946 main.go:141] libmachine: found compatible host: buildroot
	I0930 11:13:46.970534   26946 main.go:141] libmachine: Provisioning with buildroot...
	I0930 11:13:46.970543   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970766   26946 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:13:46.970791   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:46.970955   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:46.973539   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.973929   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:46.973956   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:46.974221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:46.974372   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974556   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:46.974665   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:46.974786   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:46.974938   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:46.974953   26946 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:13:47.087604   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:13:47.087636   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.090559   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.090866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.090895   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.091089   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.091283   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091400   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.091516   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.091649   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.091811   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.091834   26946 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:13:47.203919   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:13:47.203950   26946 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:13:47.203969   26946 buildroot.go:174] setting up certificates
	I0930 11:13:47.203977   26946 provision.go:84] configureAuth start
	I0930 11:13:47.203986   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:13:47.204270   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.207236   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207589   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.207618   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.207750   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.210196   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210560   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.210587   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.210754   26946 provision.go:143] copyHostCerts
	I0930 11:13:47.210783   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210816   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:13:47.210826   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:13:47.210895   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:13:47.210966   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.210983   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:13:47.210989   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:13:47.211013   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:13:47.211059   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211076   26946 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:13:47.211082   26946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:13:47.211104   26946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:13:47.211150   26946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
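
The log line above shows the node's server certificate being signed by the shared minikube CA with SANs covering 127.0.0.1, the VM IP, the hostname, localhost and minikube. A minimal, self-contained Go sketch of that kind of CA-signed, SAN-bearing certificate (illustrative only; the key size, validity period and subject fields are assumptions, not minikube's actual provisioner code):

    // sancert.go - hypothetical sketch of a CA-signed server cert with SANs.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key pair, playing the role of ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.ha-033260-m03"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-033260-m03"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-033260-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
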
	I0930 11:13:47.437398   26946 provision.go:177] copyRemoteCerts
	I0930 11:13:47.437447   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:13:47.437470   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.440541   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.440922   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.440953   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.441156   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.441379   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.441583   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.441760   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.524024   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:13:47.524094   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:13:47.548921   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:13:47.548991   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:13:47.573300   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:13:47.573362   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:13:47.597885   26946 provision.go:87] duration metric: took 393.894244ms to configureAuth
	I0930 11:13:47.597913   26946 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:13:47.598137   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:47.598221   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.600783   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601100   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.601141   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.601308   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.601511   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601694   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.601837   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.601988   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.602139   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.602153   26946 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:13:47.824726   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:13:47.824757   26946 main.go:141] libmachine: Checking connection to Docker...
	I0930 11:13:47.824767   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetURL
	I0930 11:13:47.826205   26946 main.go:141] libmachine: (ha-033260-m03) DBG | Using libvirt version 6000000
	I0930 11:13:47.829313   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829732   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.829758   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.829979   26946 main.go:141] libmachine: Docker is up and running!
	I0930 11:13:47.829995   26946 main.go:141] libmachine: Reticulating splines...
	I0930 11:13:47.830002   26946 client.go:171] duration metric: took 29.095541403s to LocalClient.Create
	I0930 11:13:47.830029   26946 start.go:167] duration metric: took 29.095609634s to libmachine.API.Create "ha-033260"
	I0930 11:13:47.830042   26946 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:13:47.830059   26946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:13:47.830080   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:47.830308   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:13:47.830331   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.832443   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.832840   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.832866   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.833032   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.833204   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.833336   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.833448   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:47.911982   26946 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:13:47.916413   26946 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:13:47.916434   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:13:47.916512   26946 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:13:47.916604   26946 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:13:47.916615   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:13:47.916726   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:13:47.926360   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:47.951398   26946 start.go:296] duration metric: took 121.337458ms for postStartSetup
	I0930 11:13:47.951443   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:13:47.951959   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:47.954522   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.954882   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.954902   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.955203   26946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:13:47.955450   26946 start.go:128] duration metric: took 29.240250665s to createHost
	I0930 11:13:47.955475   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:47.957714   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958054   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:47.958091   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:47.958262   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:47.958436   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958562   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:47.958708   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:47.958822   26946 main.go:141] libmachine: Using SSH client type: native
	I0930 11:13:47.958982   26946 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:13:47.958994   26946 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:13:48.062976   26946 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727694828.042605099
	
	I0930 11:13:48.062999   26946 fix.go:216] guest clock: 1727694828.042605099
	I0930 11:13:48.063009   26946 fix.go:229] Guest: 2024-09-30 11:13:48.042605099 +0000 UTC Remote: 2024-09-30 11:13:47.955462433 +0000 UTC m=+151.020514213 (delta=87.142666ms)
	I0930 11:13:48.063030   26946 fix.go:200] guest clock delta is within tolerance: 87.142666ms
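
For the clock check above, the provisioner reads the guest clock with date +%s.%N, compares it against the host clock, and only adjusts the guest when the difference exceeds a tolerance. A small sketch of that comparison (the 2s tolerance is an assumption for illustration, not minikube's actual threshold; the timestamps are the ones from the log):

    // clockskew.go - illustrative guest/host clock delta check.
    package main

    import (
        "fmt"
        "time"
    )

    // clockDelta returns the absolute difference between host and guest clocks
    // and whether it falls within the given tolerance.
    func clockDelta(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, d <= tol
    }

    func main() {
        host := time.Date(2024, 9, 30, 11, 13, 47, 955462433, time.UTC) // "Remote" time from the log
        guest := time.Date(2024, 9, 30, 11, 13, 48, 42605099, time.UTC) // "Guest" time from the log
        d, ok := clockDelta(host, guest, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=87.142666ms within tolerance=true
    }
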
	I0930 11:13:48.063037   26946 start.go:83] releasing machines lock for "ha-033260-m03", held for 29.347943498s
	I0930 11:13:48.063057   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.063295   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:48.065833   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.066130   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.066166   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.068440   26946 out.go:177] * Found network options:
	I0930 11:13:48.070194   26946 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:13:48.071578   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.071602   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.071621   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072253   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072426   26946 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:13:48.072506   26946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:13:48.072552   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:13:48.072605   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:13:48.072630   26946 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:13:48.072698   26946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:13:48.072719   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:13:48.075267   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075365   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075641   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075667   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075715   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:48.075746   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:48.075778   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.075958   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.075973   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:13:48.076123   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076126   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:13:48.076233   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.076311   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:13:48.076464   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:13:48.315424   26946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:13:48.322103   26946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:13:48.322167   26946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:13:48.340329   26946 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:13:48.340354   26946 start.go:495] detecting cgroup driver to use...
	I0930 11:13:48.340419   26946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:13:48.356866   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:13:48.372077   26946 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:13:48.372139   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:13:48.387616   26946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:13:48.402259   26946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:13:48.523588   26946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:13:48.671634   26946 docker.go:233] disabling docker service ...
	I0930 11:13:48.671693   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:13:48.687483   26946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:13:48.702106   26946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:13:48.848121   26946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:13:48.976600   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:13:48.991745   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:13:49.014226   26946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:13:49.014303   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.025816   26946 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:13:49.025892   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.038153   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.049762   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.061409   26946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:13:49.073521   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.084788   26946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:13:49.104074   26946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
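
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough in-memory equivalent of the first two edits in Go (illustrative; the real edits run over SSH against the file on the guest):

    // crioconf.go - in-memory equivalent of the pause-image and cgroup-manager sed edits.
    package main

    import (
        "fmt"
        "regexp"
    )

    var (
        pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    )

    // rewriteCrioConf pins the pause image and switches the cgroup manager,
    // matching the sed expressions shown in the log.
    func rewriteCrioConf(conf string) string {
        conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in))
    }
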
	I0930 11:13:49.116909   26946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:13:49.129116   26946 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:13:49.129180   26946 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:13:49.143704   26946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
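
When the bridge netfilter sysctl cannot be read (the status-255 failure above), the setup falls back to loading the br_netfilter module and then enables IPv4 forwarding. A sketch of that fallback with os/exec, using the same commands as the log (error handling simplified; local execution is assumed here, whereas minikube runs these over SSH on the guest):

    // netfilter.go - sketch of the "check sysctl, fall back to modprobe" sequence.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    func main() {
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // Expected before the module is loaded: /proc/sys/net/bridge does not exist yet.
            log.Printf("bridge netfilter sysctl unavailable (%v), loading br_netfilter", err)
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                log.Fatalf("modprobe br_netfilter: %v", err)
            }
        }
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            log.Fatalf("enabling ip_forward: %v", err)
        }
    }
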
	I0930 11:13:49.155037   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:49.274882   26946 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:13:49.369751   26946 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:13:49.369822   26946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:13:49.375071   26946 start.go:563] Will wait 60s for crictl version
	I0930 11:13:49.375129   26946 ssh_runner.go:195] Run: which crictl
	I0930 11:13:49.379040   26946 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:13:49.421444   26946 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:13:49.421545   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.450271   26946 ssh_runner.go:195] Run: crio --version
	I0930 11:13:49.481221   26946 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:13:49.482604   26946 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:13:49.483828   26946 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:13:49.485093   26946 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:13:49.488106   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488528   26946 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:13:49.488555   26946 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:13:49.488791   26946 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:13:49.493484   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:13:49.506933   26946 mustload.go:65] Loading cluster: ha-033260
	I0930 11:13:49.507212   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:13:49.507471   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.507506   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.522665   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0930 11:13:49.523038   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.523529   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.523558   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.523847   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.524064   26946 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:13:49.525464   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:49.525875   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:49.525916   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:49.540657   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0930 11:13:49.541129   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:49.541659   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:49.541680   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:49.541991   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:49.542172   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:49.542336   26946 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:13:49.542347   26946 certs.go:194] generating shared ca certs ...
	I0930 11:13:49.542362   26946 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.542476   26946 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:13:49.542515   26946 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:13:49.542525   26946 certs.go:256] generating profile certs ...
	I0930 11:13:49.542591   26946 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:13:49.542615   26946 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:13:49.542628   26946 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:13:49.661476   26946 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 ...
	I0930 11:13:49.661515   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37: {Name:mk149c204bf31f855e781b37ed00d2d45943dc74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661762   26946 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 ...
	I0930 11:13:49.661785   26946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37: {Name:mka1c6759c2661bfc3ab07f3168b7da60e9fc340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:13:49.661922   26946 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:13:49.662094   26946 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:13:49.662275   26946 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:13:49.662294   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:13:49.662313   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:13:49.662333   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:13:49.662351   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:13:49.662368   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:13:49.662384   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:13:49.662452   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:13:49.677713   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:13:49.677801   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:13:49.677835   26946 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:13:49.677845   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:13:49.677866   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:13:49.677888   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:13:49.677908   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:13:49.677944   26946 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:13:49.677971   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:13:49.677983   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:13:49.677997   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:49.678030   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:49.681296   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.681887   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:49.681920   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:49.682144   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:49.682365   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:49.682543   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:49.682691   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:49.766051   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:13:49.771499   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:13:49.783878   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:13:49.789403   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:13:49.801027   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:13:49.806774   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:13:49.824334   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:13:49.828617   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:13:49.838958   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:13:49.843225   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:13:49.853655   26946 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:13:49.857681   26946 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:13:49.869752   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:13:49.897794   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:13:49.925363   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:13:49.951437   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:13:49.978863   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:13:50.005498   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:13:50.030426   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:13:50.055825   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:13:50.080625   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:13:50.113315   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:13:50.142931   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:13:50.168186   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:13:50.185792   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:13:50.203667   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:13:50.222202   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:13:50.241795   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:13:50.260704   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:13:50.278865   26946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:13:50.296763   26946 ssh_runner.go:195] Run: openssl version
	I0930 11:13:50.303234   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:13:50.314412   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319228   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.319276   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:13:50.325090   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:13:50.337510   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:13:50.351103   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356273   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.356331   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:13:50.362227   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:13:50.373066   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:13:50.384243   26946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.388958   26946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.389012   26946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:13:50.394820   26946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:13:50.406295   26946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:13:50.410622   26946 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:13:50.410674   26946 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:13:50.410806   26946 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:13:50.410833   26946 kube-vip.go:115] generating kube-vip config ...
	I0930 11:13:50.410873   26946 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:13:50.426800   26946 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:13:50.426870   26946 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
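
The kube-vip config above is a static pod manifest; a few lines further down it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet starts it as soon as it comes up. A quick sanity check that such a manifest parses as YAML, using the third-party gopkg.in/yaml.v3 module (assumed to be fetched with go get; the local file name kube-vip.yaml is hypothetical):

    // vipcheck.go - sanity-parses a kube-vip static pod manifest like the one above.
    package main

    import (
        "fmt"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("kube-vip.yaml")
        if err != nil {
            log.Fatal(err)
        }
        var m map[string]interface{}
        if err := yaml.Unmarshal(raw, &m); err != nil {
            log.Fatalf("manifest is not valid YAML: %v", err)
        }
        fmt.Printf("parsed manifest: apiVersion=%v kind=%v\n", m["apiVersion"], m["kind"])
    }
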
	I0930 11:13:50.426931   26946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.437767   26946 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 11:13:50.437827   26946 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 11:13:50.448545   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 11:13:50.448565   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 11:13:50.448591   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448597   26946 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 11:13:50.448619   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448655   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 11:13:50.448668   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 11:13:50.448599   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:13:50.460142   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0930 11:13:50.460178   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 11:13:50.460491   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0930 11:13:50.460521   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 11:13:50.475258   26946 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.475370   26946 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 11:13:50.603685   26946 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0930 11:13:50.603734   26946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0930 11:13:51.331864   26946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:13:51.343111   26946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:13:51.361905   26946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:13:51.380114   26946 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:13:51.398229   26946 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:13:51.402565   26946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
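
The bash one-liner above implements replace-or-append for the control-plane.minikube.internal entry: drop any existing line for the name, then write the fresh IP-to-name mapping back to /etc/hosts. The same idea as a small Go helper (illustrative; it operates on an in-memory copy rather than the real file on the guest):

    // hostsentry.go - in-memory version of the replace-or-append hosts pipeline above.
    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry drops any existing line ending in "<TAB>name" and appends
    // the fresh "ip<TAB>name" mapping, mirroring the grep -v / echo pipeline.
    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
        fmt.Print(ensureHostsEntry(hosts, "192.168.39.254", "control-plane.minikube.internal"))
    }
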
	I0930 11:13:51.414789   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:13:51.547939   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:13:51.568598   26946 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:13:51.569032   26946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:13:51.569117   26946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:13:51.584541   26946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45637
	I0930 11:13:51.585019   26946 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:13:51.585485   26946 main.go:141] libmachine: Using API Version  1
	I0930 11:13:51.585506   26946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:13:51.585824   26946 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:13:51.586011   26946 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:13:51.586156   26946 start.go:317] joinCluster: &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:13:51.586275   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0930 11:13:51.586294   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:13:51.589730   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590160   26946 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:13:51.590189   26946 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:13:51.590326   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:13:51.590673   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:13:51.590813   26946 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:13:51.590943   26946 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:13:51.742155   26946 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:13:51.742217   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443"
	I0930 11:14:14.534669   26946 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ve4s5e.z27uafhrt4vwx76f --discovery-token-ca-cert-hash sha256:cd44e8fdd740c8f48e1465c61d0873a6e2887c60a128a2d513ee0addae94f8a2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-033260-m03 --control-plane --apiserver-advertise-address=192.168.39.238 --apiserver-bind-port=8443": (22.792425292s)
	I0930 11:14:14.534703   26946 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0930 11:14:15.090933   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-033260-m03 minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=ha-033260 minikube.k8s.io/primary=false
	I0930 11:14:15.217971   26946 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-033260-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0930 11:14:15.356327   26946 start.go:319] duration metric: took 23.770167838s to joinCluster
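The steps from 11:13:51 to 11:14:15 above are the standard kubeadm control-plane join flow, driven over SSH: a join token is created on an existing control plane, kubeadm join is run on m03 with --control-plane, kubelet is enabled and started, and the new node is labeled and has its control-plane NoSchedule taint removed so it can also schedule workloads. A manual equivalent, using the names and addresses from this log (illustrative only; flags can vary between kubeadm releases), would be:

	# On an existing control-plane node: print a reusable join command
	sudo kubeadm token create --print-join-command --ttl=0
	# On the joining machine (m03): run the printed command plus the control-plane flags
	sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	  --discovery-token-ca-cert-hash sha256:<hash> \
	  --control-plane --apiserver-advertise-address=192.168.39.238
	# Then let ordinary pods schedule on the new control plane
	kubectl label --overwrite nodes ha-033260-m03 minikube.k8s.io/primary=false
	kubectl taint nodes ha-033260-m03 node-role.kubernetes.io/control-plane:NoSchedule-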
	I0930 11:14:15.356406   26946 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:14:15.356782   26946 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:14:15.358117   26946 out.go:177] * Verifying Kubernetes components...
	I0930 11:14:15.359571   26946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:14:15.622789   26946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:14:15.640897   26946 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:14:15.641233   26946 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:14:15.641327   26946 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:14:15.641657   26946 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:14:15.641759   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:15.641771   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:15.641783   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:15.641790   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:15.644778   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:16.142790   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.142817   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.142829   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.142842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.146568   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:16.642107   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:16.642131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:16.642142   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:16.642147   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:16.648466   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:17.142339   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.142362   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.142375   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.142381   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.146498   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:17.642900   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:17.642921   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:17.642930   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:17.642934   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:17.646792   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:17.647749   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:18.141856   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.141880   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.141889   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.141893   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.145059   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:18.641848   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:18.641883   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:18.641896   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:18.641905   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:18.645609   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:19.142000   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.142030   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.142041   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.142046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.146124   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.642709   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:19.642734   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:19.642746   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:19.642751   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:19.647278   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:19.648375   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:20.142851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.142871   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.142879   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.142883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.146328   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:20.642913   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:20.642940   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:20.642954   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:20.642961   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:20.653974   26946 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:14:21.142909   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.142931   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.142942   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.142954   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.146862   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:21.642348   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:21.642373   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:21.642383   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:21.642388   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:21.647699   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:22.142178   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.142198   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.142206   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.142210   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.145760   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:22.146824   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:22.642895   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:22.642917   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:22.642925   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:22.642931   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:22.648085   26946 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:14:23.141847   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.141872   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.141883   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.141888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.149699   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:23.641992   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:23.642013   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:23.642023   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:23.642029   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:23.645640   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:24.142073   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.142096   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.142104   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.142108   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.146322   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:24.146891   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:24.642695   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:24.642716   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:24.642724   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:24.642731   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:24.646216   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:25.142500   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.142538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.142546   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.146687   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:25.642542   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:25.642566   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:25.642573   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:25.642577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:25.646661   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:26.142499   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.142535   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.142545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.142552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.146202   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:26.147018   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:26.642712   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:26.642739   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:26.642751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:26.642756   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:26.646338   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:27.142246   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.142276   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.142286   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.142292   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.146473   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:27.642325   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:27.642347   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:27.642355   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:27.642359   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:27.646109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.142885   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.142912   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.142923   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.142929   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.146499   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:28.147250   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:28.642625   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:28.642652   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:28.642663   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:28.642669   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:28.646618   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.142391   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.142412   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.142420   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.142424   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.146320   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:29.642615   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:29.642640   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:29.642649   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:29.642653   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:29.646130   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.142916   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.142938   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.142947   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.142951   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.146109   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.642863   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:30.642885   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:30.642893   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:30.642897   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:30.646458   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:30.647204   26946 node_ready.go:53] node "ha-033260-m03" has status "Ready":"False"
	I0930 11:14:31.142601   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.142623   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.142631   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.142635   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.146623   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.642077   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:31.642103   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.642114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.642119   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.645322   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.645964   26946 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:14:31.645987   26946 node_ready.go:38] duration metric: took 16.004306964s for node "ha-033260-m03" to be "Ready" ...
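The repeated GET /api/v1/nodes/ha-033260-m03 requests above are minikube polling the node object roughly every 500ms until its Ready condition flips to True, which happened here after about 16 seconds. The same wait can be expressed directly with kubectl (shown only to illustrate what the poll inspects; the kubeconfig path is the one this run loaded):

	# Block until the node reports Ready, with the same 6-minute budget
	kubectl --kubeconfig /home/jenkins/minikube-integration/19734-3842/kubeconfig wait \
	  --for=condition=Ready node/ha-033260-m03 --timeout=6m
	# Spot-check the condition the poll is reading
	kubectl get node ha-033260-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'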
	I0930 11:14:31.645997   26946 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:14:31.646075   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:31.646090   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.646099   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.646106   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.653396   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:31.663320   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.663400   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:14:31.663405   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.663412   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.663420   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.666829   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.667522   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.667537   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.667544   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.667550   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.670668   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.671278   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.671301   26946 pod_ready.go:82] duration metric: took 7.951059ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671309   26946 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.671362   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:14:31.671369   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.671376   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.671383   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.674317   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.675093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.675107   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.675114   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.675120   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.678167   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.678702   26946 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.678717   26946 pod_ready.go:82] duration metric: took 7.402263ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678725   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.678775   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:14:31.678782   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.678789   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.678794   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.682042   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.683033   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:31.683050   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.683060   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.683067   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.686124   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.686928   26946 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.686944   26946 pod_ready.go:82] duration metric: took 8.212366ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.686951   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.687047   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:14:31.687059   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.687068   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.687077   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690190   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:31.690825   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:31.690840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.690850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.690858   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.693597   26946 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:14:31.694016   26946 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:31.694032   26946 pod_ready.go:82] duration metric: took 7.073598ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.694050   26946 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:31.842476   26946 request.go:632] Waited for 148.347924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842535   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:14:31.842540   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:31.842547   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:31.842551   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:31.846779   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.042378   26946 request.go:632] Waited for 194.977116ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042433   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:32.042441   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.042451   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.042460   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.046938   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:32.047883   26946 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.047901   26946 pod_ready.go:82] duration metric: took 353.843104ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.047915   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.242541   26946 request.go:632] Waited for 194.549595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242605   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:14:32.242614   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.242625   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.242634   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.246270   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.443112   26946 request.go:632] Waited for 196.194005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443180   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:32.443188   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.443196   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.443204   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.446839   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.447484   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.447503   26946 pod_ready.go:82] duration metric: took 399.580784ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.447514   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.642591   26946 request.go:632] Waited for 194.994624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642658   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:14:32.642663   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.642670   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.642674   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.646484   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.842626   26946 request.go:632] Waited for 195.406068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842682   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:32.842700   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:32.842723   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:32.842729   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:32.846693   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:32.847589   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:32.847611   26946 pod_ready.go:82] duration metric: took 400.088499ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:32.847622   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.042743   26946 request.go:632] Waited for 195.040991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042794   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:14:33.042810   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.042822   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.042831   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.047437   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:33.242766   26946 request.go:632] Waited for 194.350243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242826   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:33.242831   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.242838   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.242842   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.246530   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.247420   26946 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.247442   26946 pod_ready.go:82] duration metric: took 399.811844ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.247458   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.442488   26946 request.go:632] Waited for 194.945176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442539   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:14:33.442545   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.442552   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.442555   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.446162   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.642540   26946 request.go:632] Waited for 195.369281ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642603   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:33.642609   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.642615   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.642620   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.646221   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:33.646635   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:33.646655   26946 pod_ready.go:82] duration metric: took 399.188776ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.646667   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:33.843125   26946 request.go:632] Waited for 196.391494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843216   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:14:33.843227   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:33.843238   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:33.843244   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:33.846706   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.042579   26946 request.go:632] Waited for 195.024865ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042680   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.042689   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.042697   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.042701   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.046091   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.046788   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.046810   26946 pod_ready.go:82] duration metric: took 400.13538ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.046823   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.242282   26946 request.go:632] Waited for 195.389369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242349   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:14:34.242356   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.242365   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.242370   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.246179   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.442166   26946 request.go:632] Waited for 195.280581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442224   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:34.442230   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.442237   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.442240   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.445326   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.445954   26946 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.445978   26946 pod_ready.go:82] duration metric: took 399.145783ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.445991   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.643049   26946 request.go:632] Waited for 196.981464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643124   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:14:34.643131   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.643141   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.643148   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.647040   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.843108   26946 request.go:632] Waited for 195.398341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843190   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:34.843212   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:34.843227   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:34.843238   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:34.846825   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:34.847411   26946 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:34.847432   26946 pod_ready.go:82] duration metric: took 401.432801ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:34.847445   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.043014   26946 request.go:632] Waited for 195.507309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043093   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:14:35.043102   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.043109   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.043117   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.046836   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.242781   26946 request.go:632] Waited for 195.218665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242851   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:35.242856   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.242862   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.242866   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.246468   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.247353   26946 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.247380   26946 pod_ready.go:82] duration metric: took 399.923772ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.247393   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.442345   26946 request.go:632] Waited for 194.883869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442516   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:14:35.442529   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.442541   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.442550   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.446031   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.642937   26946 request.go:632] Waited for 196.342972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642985   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:35.642990   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.642997   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.643001   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.646624   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:35.647369   26946 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:35.647389   26946 pod_ready.go:82] duration metric: took 399.989175ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.647398   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:35.842485   26946 request.go:632] Waited for 195.020246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842575   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:14:35.842586   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:35.842597   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:35.842605   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:35.845997   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.043063   26946 request.go:632] Waited for 196.343615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043113   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:14:36.043119   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.043125   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.043131   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.046327   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.046783   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.046799   26946 pod_ready.go:82] duration metric: took 399.395226ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.046810   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.242936   26946 request.go:632] Waited for 196.062784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243003   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:14:36.243024   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.243037   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.243046   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.246888   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.442803   26946 request.go:632] Waited for 195.27104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442859   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:14:36.442867   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.442877   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.442888   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.446304   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.446972   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.447001   26946 pod_ready.go:82] duration metric: took 400.183775ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.447011   26946 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.642468   26946 request.go:632] Waited for 195.395201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642532   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:14:36.642538   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.642545   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.642549   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.646175   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.842841   26946 request.go:632] Waited for 195.970164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842911   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:14:36.842924   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.842938   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.842946   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.846452   26946 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:14:36.847134   26946 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:14:36.847153   26946 pod_ready.go:82] duration metric: took 400.136505ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:14:36.847163   26946 pod_ready.go:39] duration metric: took 5.201155018s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
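The pod checks above apply the same pattern to each system-critical component: for every label selector in the list (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler) the pod object is fetched, its Ready condition inspected, and its node fetched again, with the client-side rate limiter producing the "Waited for ..." request.go lines. An approximate per-component equivalent (illustrative only):

	# CoreDNS uses the k8s-app label; the static control-plane pods use component=
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m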
	I0930 11:14:36.847177   26946 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:14:36.847229   26946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:14:36.869184   26946 api_server.go:72] duration metric: took 21.512734614s to wait for apiserver process to appear ...
	I0930 11:14:36.869210   26946 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:14:36.869231   26946 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:14:36.875656   26946 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:14:36.875723   26946 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:14:36.875730   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:36.875741   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:36.875751   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:36.876680   26946 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:14:36.876763   26946 api_server.go:141] control plane version: v1.31.1
	I0930 11:14:36.876785   26946 api_server.go:131] duration metric: took 7.567961ms to wait for apiserver health ...
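The health check above first probes the apiserver's /healthz endpoint (expecting the literal body "ok") and then reads /version to confirm the control-plane version, v1.31.1 here. With admin credentials the same two probes can be reproduced as follows (a sketch; the certificate files are the ones from the profile paths shown earlier in this log):

	kubectl get --raw /healthz    # expects "ok"
	kubectl get --raw /version    # returns build-info JSON including v1.31.1
	# or directly against the endpoint with the profile's client certificate
	curl --cacert ca.crt --cert client.crt --key client.key https://192.168.39.249:8443/healthz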
	I0930 11:14:36.876795   26946 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:14:37.042474   26946 request.go:632] Waited for 165.583212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042549   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.042557   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.042568   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.042577   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.049247   26946 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:14:37.056036   26946 system_pods.go:59] 24 kube-system pods found
	I0930 11:14:37.056063   26946 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.056069   26946 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.056073   26946 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.056076   26946 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.056079   26946 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.056082   26946 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.056085   26946 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.056088   26946 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.056091   26946 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.056094   26946 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.056097   26946 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.056100   26946 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.056105   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.056108   26946 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.056111   26946 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.056115   26946 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.056120   26946 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.056151   26946 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.056164   26946 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.056169   26946 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.056177   26946 system_pods.go:61] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.056182   26946 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.056189   26946 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.056194   26946 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.056204   26946 system_pods.go:74] duration metric: took 179.399341ms to wait for pod list to return data ...
	I0930 11:14:37.056216   26946 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:14:37.242741   26946 request.go:632] Waited for 186.4192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242795   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:14:37.242800   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.242807   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.242813   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.247153   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.247269   26946 default_sa.go:45] found service account: "default"
	I0930 11:14:37.247285   26946 default_sa.go:55] duration metric: took 191.060236ms for default service account to be created ...
	I0930 11:14:37.247292   26946 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:14:37.442756   26946 request.go:632] Waited for 195.39174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442830   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:14:37.442840   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.442850   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.442861   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.450094   26946 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:14:37.457440   26946 system_pods.go:86] 24 kube-system pods found
	I0930 11:14:37.457477   26946 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:14:37.457485   26946 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:14:37.457491   26946 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:14:37.457497   26946 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:14:37.457506   26946 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:14:37.457512   26946 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:14:37.457518   26946 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:14:37.457524   26946 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:14:37.457530   26946 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:14:37.457538   26946 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:14:37.457547   26946 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:14:37.457553   26946 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:14:37.457562   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:14:37.457569   26946 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:14:37.457575   26946 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:14:37.457584   26946 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:14:37.457590   26946 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:14:37.457597   26946 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:14:37.457603   26946 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:14:37.457612   26946 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:14:37.457630   26946 system_pods.go:89] "kube-vip-ha-033260" [544867fe-5f73-4613-ab03-f0a8f8ce64e2] Running
	I0930 11:14:37.457637   26946 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:14:37.457643   26946 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:14:37.457648   26946 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:14:37.457657   26946 system_pods.go:126] duration metric: took 210.359061ms to wait for k8s-apps to be running ...
	I0930 11:14:37.457669   26946 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:14:37.457721   26946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:14:37.476929   26946 system_svc.go:56] duration metric: took 19.252575ms WaitForService to wait for kubelet
	I0930 11:14:37.476958   26946 kubeadm.go:582] duration metric: took 22.120515994s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:14:37.476982   26946 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:14:37.642377   26946 request.go:632] Waited for 165.309074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642424   26946 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:14:37.642429   26946 round_trippers.go:469] Request Headers:
	I0930 11:14:37.642438   26946 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:14:37.642449   26946 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:14:37.646747   26946 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:14:37.647864   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647885   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647896   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647900   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647904   26946 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:14:37.647908   26946 node_conditions.go:123] node cpu capacity is 2
	I0930 11:14:37.647912   26946 node_conditions.go:105] duration metric: took 170.925329ms to run NodePressure ...
	I0930 11:14:37.647922   26946 start.go:241] waiting for startup goroutines ...
	I0930 11:14:37.647945   26946 start.go:255] writing updated cluster config ...
	I0930 11:14:37.648212   26946 ssh_runner.go:195] Run: rm -f paused
	I0930 11:14:37.699426   26946 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:14:37.701518   26946 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
	
	
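	The log above records minikube's post-start readiness checks against the "ha-033260" control plane: an apiserver /healthz and /version probe, a listing of the 24 kube-system pods, a check that the default service account exists, a kubelet service check over SSH, and a NodePressure scan of node cpu and ephemeral-storage capacity. As a rough sketch only (not part of the test output), the same checks can be reproduced by hand with standard kubectl commands, assuming KUBECONFIG points at the cluster from this run:
	
	# Hypothetical manual reproduction of the readiness checks logged above.
	kubectl get --raw /healthz           # apiserver health; expect "ok"
	kubectl version                      # control plane version (v1.31.1 in this run)
	kubectl get pods -n kube-system      # the kube-system pods enumerated above
	kubectl get serviceaccount default   # default service account check
	kubectl describe nodes | grep -E 'cpu:|ephemeral-storage:'   # node capacity used for the NodePressure check
	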
	==> CRI-O <==
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.734214338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695107734153369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc0a85c1-3d60-4c75-ba51-e433481e8ffc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.735295115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c2a64c6-2bc2-4b6c-b6b1-d24466808c61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.735375192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c2a64c6-2bc2-4b6c-b6b1-d24466808c61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.735754640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c2a64c6-2bc2-4b6c-b6b1-d24466808c61 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.798111683Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6499b9f-de6a-46c2-894d-3354de8d0ca9 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.798218502Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6499b9f-de6a-46c2-894d-3354de8d0ca9 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.800790697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e918c09-1538-4185-9225-f7ada917a70e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.801519356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695107801480893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e918c09-1538-4185-9225-f7ada917a70e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.802453458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cfd0d6d-ed11-4da0-8813-7aff22e9c636 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.802549890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cfd0d6d-ed11-4da0-8813-7aff22e9c636 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.802933477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cfd0d6d-ed11-4da0-8813-7aff22e9c636 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.860813270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=302216a5-49a9-4239-9431-ae791bd0232f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.860936228Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=302216a5-49a9-4239-9431-ae791bd0232f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.862267912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1782899-2731-45f5-ada4-e604c7257203 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.863185995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695107863069218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1782899-2731-45f5-ada4-e604c7257203 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.865007569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36087977-b3a8-471c-a0c0-c42ec501cba6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.865115775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36087977-b3a8-471c-a0c0-c42ec501cba6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.865527173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36087977-b3a8-471c-a0c0-c42ec501cba6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.917111837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8f1c3bb-8294-4491-bee4-8862293f3af7 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.917229916Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8f1c3bb-8294-4491-bee4-8862293f3af7 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.918808850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca9741ca-3993-4f94-bc87-3a6b2302f0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.919270758Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695107919247045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca9741ca-3993-4f94-bc87-3a6b2302f0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.920316985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd22c0b2-4fc6-404c-a53a-2026d015cb12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.920410509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd22c0b2-4fc6-404c-a53a-2026d015cb12 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:18:27 ha-033260 crio[660]: time="2024-09-30 11:18:27.921487047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727694880474406078,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738700499783,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f612e29e1b4eb533d1a232015bb5469c9324bbc7708670d4d8dd51b5a3607245,PodSandboxId:571ace347c86dd46d00d874681f602b39855eeddbadcef9282df123361d4bead,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727694738655314464,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727694738606901190,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95
d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17276947
26647937132,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727694726649930334,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727694715828520149,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727694713376562110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489,PodSandboxId:6bdfa517065574a1ed07663d90754c844f217f9666ba7d8956d982f55232ddab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727694713329220180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727694713298021814,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac,PodSandboxId:676d3fbaf3e6fcd911b9ae81364c8a5bf71bb8d8bc40e253e71a7eaca6a0ec90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727694713207263431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd22c0b2-4fc6-404c-a53a-2026d015cb12 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	f612e29e1b4eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   571ace347c86d       storage-provisioner
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   b2990036962da       kindnet-g94k6
	7a9e01197e5c6       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2bd722c6afa63       kube-vip-ha-033260
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   f789f882a4d3c       etcd-ha-033260
	e62c0a6cc031f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   6bdfa51706557       kube-controller-manager-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	cd2027f0a04e1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   676d3fbaf3e6f       kube-apiserver-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:53856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00078279s
	[INFO] 10.244.0.4:40457 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001984462s
	[INFO] 10.244.2.2:53822 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006986108s
	[INFO] 10.244.2.2:56668 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001677174s
	[INFO] 10.244.1.2:39538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172765s
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.1.2:57277 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000266561s
	[INFO] 10.244.1.2:48530 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000385853s
	[INFO] 10.244.0.4:37489 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002109336s
	[INFO] 10.244.0.4:53881 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132699s
	[INFO] 10.244.0.4:35131 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120989s
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:02 +0000   Mon, 30 Sep 2024 11:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    e1ab2d78-3004-455b-b8b3-86a48689299f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m24s
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m24s
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m29s
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m29s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m29s  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m25s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           5m25s  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           4m8s   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:15:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 11:14:56 +0000   Mon, 30 Sep 2024 11:16:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    08e05cdc-874f-4f82-99d4-84bb26fd07ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             117s                   node-controller  Node ha-033260-m02 status is now: NodeNotReady
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:14:41 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    92c7790b-7ee9-43e4-b1b8-fd69ae5fa989
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m15s
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m17s
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:15:45 +0000   Mon, 30 Sep 2024 11:15:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    15a5a2bf-b69b-4b89-b5f2-f6529ae084b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-033260-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 11:11] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050905] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040385] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839402] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.653040] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.597753] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	{"level":"warn","ts":"2024-09-30T11:18:27.950896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.051271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.150866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.181930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.191871Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.196782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.206911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.214040Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.221065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.224845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.228578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.235078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.242435Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.249143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.250467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.250682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.257872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.261312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.325002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.335956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.348955Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.351893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.359903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.374711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-30T11:18:28.440934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"318ee90c3446d547","from":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:18:28 up 7 min,  0 users,  load average: 0.28, 0.17, 0.08
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:17:57.863102       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854593       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:07.854770       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:07.854951       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:07.854979       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:07.855034       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:07.855052       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:18:07.855106       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:07.855130       1 main.go:299] handling current node
	I0930 11:18:17.860759       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:17.860855       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:17.860991       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:17.861014       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:18:17.861065       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:17.861084       1 main.go:299] handling current node
	I0930 11:18:17.861114       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:17.861129       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:27.863894       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:18:27.864033       1 main.go:299] handling current node
	I0930 11:18:27.864062       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:18:27.864091       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:18:27.864337       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:18:27.864344       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:18:27.864408       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:18:27.864413       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac] <==
	I0930 11:11:58.463989       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0930 11:11:58.477865       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249]
	I0930 11:11:58.479372       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:11:58.487328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:11:58.586099       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 11:11:59.517972       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 11:11:59.542879       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0930 11:11:59.558820       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 11:12:04.282712       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0930 11:12:04.376507       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0930 11:14:41.794861       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58556: use of closed network connection
	E0930 11:14:41.976585       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58584: use of closed network connection
	E0930 11:14:42.175263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58602: use of closed network connection
	E0930 11:14:42.398453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58626: use of closed network connection
	E0930 11:14:42.598999       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58646: use of closed network connection
	E0930 11:14:42.786264       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58670: use of closed network connection
	E0930 11:14:42.985795       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58688: use of closed network connection
	E0930 11:14:43.164451       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58700: use of closed network connection
	E0930 11:14:43.352582       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58708: use of closed network connection
	E0930 11:14:43.634509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58726: use of closed network connection
	E0930 11:14:43.812335       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58746: use of closed network connection
	E0930 11:14:44.006684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58766: use of closed network connection
	E0930 11:14:44.194031       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58782: use of closed network connection
	E0930 11:14:44.561371       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58814: use of closed network connection
	W0930 11:16:08.485734       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	
	
	==> kube-controller-manager [e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489] <==
	I0930 11:15:14.593101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.593158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.605401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:14.879876       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:15.297330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:16.002721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.158455       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.429273       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:18.922721       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-033260-m04"
	I0930 11:15:18.922856       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:19.229459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:24.734460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561602       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:34.561906       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:15:34.575771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:35.966445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:15:45.204985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:16:30.993129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:30.994314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:16:31.023898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:31.050052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.150574ms"
	I0930 11:16:31.050219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.36µs"
	I0930 11:16:31.218479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:34.045967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:16:36.316239       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:12:06.949025       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:12:06.986064       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:12:06.986193       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:12:07.041171       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:12:07.041238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:12:07.041262       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:12:07.044020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:12:07.044727       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:12:07.044757       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:12:07.047853       1 config.go:199] "Starting service config controller"
	I0930 11:12:07.048187       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:12:07.048613       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:12:07.048700       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:12:07.051971       1 config.go:328] "Starting node config controller"
	I0930 11:12:07.052033       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:12:07.148982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:12:07.149026       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:12:07.152927       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	I0930 11:11:59.743507       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:14:38.641000       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.642588       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 12532e14-b4c0-4c7d-ab93-e96698fbc986(default/busybox-7dff88458-rkczc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-rkczc"
	E0930 11:14:38.642720       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rkczc\": pod busybox-7dff88458-rkczc is already assigned to node \"ha-033260-m03\"" pod="default/busybox-7dff88458-rkczc"
	I0930 11:14:38.642772       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-rkczc" node="ha-033260-m03"
	E0930 11:14:38.700019       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.700408       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e62e1e44-3723-496c-85a3-7a79e9c8264b(default/busybox-7dff88458-nbhwc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-nbhwc"
	E0930 11:14:38.700579       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nbhwc\": pod busybox-7dff88458-nbhwc is already assigned to node \"ha-033260\"" pod="default/busybox-7dff88458-nbhwc"
	I0930 11:14:38.700685       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nbhwc" node="ha-033260"
	E0930 11:14:38.701396       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:14:38.701487       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 004c0140-b81f-4e7b-aa0d-0aa6f7403351(default/busybox-7dff88458-748nr) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-748nr"
	E0930 11:14:38.701528       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-748nr\": pod busybox-7dff88458-748nr is already assigned to node \"ha-033260-m02\"" pod="default/busybox-7dff88458-748nr"
	I0930 11:14:38.701566       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-748nr" node="ha-033260-m02"
	E0930 11:15:14.650435       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mkbm9" node="ha-033260-m04"
	E0930 11:15:14.650543       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mkbm9\": pod kube-proxy-mkbm9 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-mkbm9"
	E0930 11:15:14.687957       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	
	
	==> kubelet <==
	Sep 30 11:16:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:16:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603405    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:16:59 ha-033260 kubelet[1307]: E0930 11:16:59.603474    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695019602992032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605544    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:09 ha-033260 kubelet[1307]: E0930 11:17:09.605573    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695029605156885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.607869    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:19 ha-033260 kubelet[1307]: E0930 11:17:19.608153    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695039607317316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611241    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:29 ha-033260 kubelet[1307]: E0930 11:17:29.611290    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695049610444192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.612829    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:39 ha-033260 kubelet[1307]: E0930 11:17:39.613366    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695059612275436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.615817    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:49 ha-033260 kubelet[1307]: E0930 11:17:49.616359    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695069615300757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.469234    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:17:59 ha-033260 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:17:59 ha-033260 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620277    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:17:59 ha-033260 kubelet[1307]: E0930 11:17:59.620330    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695079619430930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622386    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:09 ha-033260 kubelet[1307]: E0930 11:18:09.622824    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695089621956899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:19 ha-033260 kubelet[1307]: E0930 11:18:19.628964    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695099627068358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:18:19 ha-033260 kubelet[1307]: E0930 11:18:19.629013    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695099627068358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-033260 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-033260 -v=7 --alsologtostderr
E0930 11:20:18.064254   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-033260 -v=7 --alsologtostderr: exit status 82 (2m1.943143197s)

                                                
                                                
-- stdout --
	* Stopping node "ha-033260-m04"  ...
	* Stopping node "ha-033260-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:18:33.466970   32516 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:18:33.467276   32516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:18:33.467286   32516 out.go:358] Setting ErrFile to fd 2...
	I0930 11:18:33.467291   32516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:18:33.467469   32516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:18:33.467757   32516 out.go:352] Setting JSON to false
	I0930 11:18:33.467868   32516 mustload.go:65] Loading cluster: ha-033260
	I0930 11:18:33.468550   32516 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:18:33.468699   32516 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:18:33.468952   32516 mustload.go:65] Loading cluster: ha-033260
	I0930 11:18:33.469182   32516 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:18:33.469241   32516 stop.go:39] StopHost: ha-033260-m04
	I0930 11:18:33.469843   32516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:18:33.469897   32516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:18:33.484931   32516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I0930 11:18:33.485395   32516 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:18:33.485981   32516 main.go:141] libmachine: Using API Version  1
	I0930 11:18:33.486006   32516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:18:33.486415   32516 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:18:33.488886   32516 out.go:177] * Stopping node "ha-033260-m04"  ...
	I0930 11:18:33.490261   32516 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:18:33.490310   32516 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:18:33.490570   32516 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:18:33.490595   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:18:33.493418   32516 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:18:33.493918   32516 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:15:00 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:18:33.493934   32516 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:18:33.494086   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:18:33.494270   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:18:33.494401   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:18:33.494537   32516 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:18:33.587592   32516 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 11:18:33.642568   32516 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 11:18:33.697450   32516 main.go:141] libmachine: Stopping "ha-033260-m04"...
	I0930 11:18:33.697490   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:18:33.699071   32516 main.go:141] libmachine: (ha-033260-m04) Calling .Stop
	I0930 11:18:33.702561   32516 main.go:141] libmachine: (ha-033260-m04) Waiting for machine to stop 0/120
	I0930 11:18:34.939794   32516 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:18:34.941263   32516 main.go:141] libmachine: Machine "ha-033260-m04" was stopped.
	I0930 11:18:34.941283   32516 stop.go:75] duration metric: took 1.451031367s to stop
	I0930 11:18:34.941305   32516 stop.go:39] StopHost: ha-033260-m03
	I0930 11:18:34.941660   32516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:18:34.941711   32516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:18:34.956336   32516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
	I0930 11:18:34.956726   32516 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:18:34.957170   32516 main.go:141] libmachine: Using API Version  1
	I0930 11:18:34.957188   32516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:18:34.957501   32516 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:18:34.959659   32516 out.go:177] * Stopping node "ha-033260-m03"  ...
	I0930 11:18:34.961047   32516 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:18:34.961077   32516 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:18:34.961276   32516 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:18:34.961303   32516 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:18:34.964337   32516 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:18:34.964836   32516 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:13:33 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:18:34.964865   32516 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:18:34.965002   32516 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:18:34.965143   32516 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:18:34.965302   32516 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:18:34.965457   32516 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:18:35.051497   32516 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 11:18:35.106082   32516 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 11:18:35.161126   32516 main.go:141] libmachine: Stopping "ha-033260-m03"...
	I0930 11:18:35.161159   32516 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:18:35.162766   32516 main.go:141] libmachine: (ha-033260-m03) Calling .Stop
	I0930 11:18:35.166220   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 0/120
	I0930 11:18:36.168404   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 1/120
	I0930 11:18:37.169741   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 2/120
	I0930 11:18:38.171122   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 3/120
	I0930 11:18:39.172374   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 4/120
	I0930 11:18:40.174564   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 5/120
	I0930 11:18:41.176290   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 6/120
	I0930 11:18:42.177799   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 7/120
	I0930 11:18:43.179078   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 8/120
	I0930 11:18:44.180705   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 9/120
	I0930 11:18:45.182984   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 10/120
	I0930 11:18:46.184520   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 11/120
	I0930 11:18:47.186334   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 12/120
	I0930 11:18:48.187907   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 13/120
	I0930 11:18:49.189740   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 14/120
	I0930 11:18:50.191933   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 15/120
	I0930 11:18:51.193604   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 16/120
	I0930 11:18:52.195218   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 17/120
	I0930 11:18:53.196806   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 18/120
	I0930 11:18:54.198479   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 19/120
	I0930 11:18:55.200500   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 20/120
	I0930 11:18:56.202715   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 21/120
	I0930 11:18:57.204507   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 22/120
	I0930 11:18:58.206209   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 23/120
	I0930 11:18:59.207892   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 24/120
	I0930 11:19:00.210127   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 25/120
	I0930 11:19:01.211634   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 26/120
	I0930 11:19:02.213420   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 27/120
	I0930 11:19:03.214946   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 28/120
	I0930 11:19:04.216505   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 29/120
	I0930 11:19:05.218266   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 30/120
	I0930 11:19:06.219640   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 31/120
	I0930 11:19:07.221485   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 32/120
	I0930 11:19:08.222878   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 33/120
	I0930 11:19:09.224401   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 34/120
	I0930 11:19:10.225686   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 35/120
	I0930 11:19:11.227210   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 36/120
	I0930 11:19:12.228454   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 37/120
	I0930 11:19:13.229874   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 38/120
	I0930 11:19:14.231212   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 39/120
	I0930 11:19:15.232943   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 40/120
	I0930 11:19:16.234548   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 41/120
	I0930 11:19:17.235813   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 42/120
	I0930 11:19:18.237169   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 43/120
	I0930 11:19:19.238496   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 44/120
	I0930 11:19:20.240333   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 45/120
	I0930 11:19:21.241793   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 46/120
	I0930 11:19:22.243156   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 47/120
	I0930 11:19:23.244563   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 48/120
	I0930 11:19:24.245925   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 49/120
	I0930 11:19:25.247624   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 50/120
	I0930 11:19:26.249194   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 51/120
	I0930 11:19:27.250745   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 52/120
	I0930 11:19:28.252241   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 53/120
	I0930 11:19:29.253894   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 54/120
	I0930 11:19:30.255551   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 55/120
	I0930 11:19:31.257127   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 56/120
	I0930 11:19:32.258769   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 57/120
	I0930 11:19:33.260397   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 58/120
	I0930 11:19:34.262339   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 59/120
	I0930 11:19:35.263997   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 60/120
	I0930 11:19:36.265365   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 61/120
	I0930 11:19:37.266762   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 62/120
	I0930 11:19:38.268250   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 63/120
	I0930 11:19:39.269723   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 64/120
	I0930 11:19:40.271525   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 65/120
	I0930 11:19:41.272847   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 66/120
	I0930 11:19:42.274279   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 67/120
	I0930 11:19:43.275903   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 68/120
	I0930 11:19:44.277372   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 69/120
	I0930 11:19:45.279387   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 70/120
	I0930 11:19:46.281046   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 71/120
	I0930 11:19:47.282576   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 72/120
	I0930 11:19:48.284011   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 73/120
	I0930 11:19:49.285463   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 74/120
	I0930 11:19:50.287868   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 75/120
	I0930 11:19:51.289288   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 76/120
	I0930 11:19:52.290791   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 77/120
	I0930 11:19:53.292424   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 78/120
	I0930 11:19:54.294129   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 79/120
	I0930 11:19:55.296360   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 80/120
	I0930 11:19:56.297825   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 81/120
	I0930 11:19:57.300139   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 82/120
	I0930 11:19:58.301600   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 83/120
	I0930 11:19:59.303853   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 84/120
	I0930 11:20:00.305699   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 85/120
	I0930 11:20:01.307258   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 86/120
	I0930 11:20:02.308653   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 87/120
	I0930 11:20:03.310202   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 88/120
	I0930 11:20:04.311702   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 89/120
	I0930 11:20:05.313465   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 90/120
	I0930 11:20:06.314794   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 91/120
	I0930 11:20:07.316368   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 92/120
	I0930 11:20:08.317646   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 93/120
	I0930 11:20:09.319256   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 94/120
	I0930 11:20:10.320943   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 95/120
	I0930 11:20:11.323221   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 96/120
	I0930 11:20:12.324667   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 97/120
	I0930 11:20:13.326359   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 98/120
	I0930 11:20:14.327629   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 99/120
	I0930 11:20:15.329175   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 100/120
	I0930 11:20:16.330773   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 101/120
	I0930 11:20:17.332102   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 102/120
	I0930 11:20:18.333568   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 103/120
	I0930 11:20:19.334809   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 104/120
	I0930 11:20:20.336153   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 105/120
	I0930 11:20:21.337453   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 106/120
	I0930 11:20:22.339222   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 107/120
	I0930 11:20:23.340744   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 108/120
	I0930 11:20:24.341982   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 109/120
	I0930 11:20:25.343902   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 110/120
	I0930 11:20:26.345921   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 111/120
	I0930 11:20:27.347407   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 112/120
	I0930 11:20:28.349172   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 113/120
	I0930 11:20:29.350872   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 114/120
	I0930 11:20:30.352910   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 115/120
	I0930 11:20:31.354646   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 116/120
	I0930 11:20:32.356024   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 117/120
	I0930 11:20:33.357395   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 118/120
	I0930 11:20:34.358825   32516 main.go:141] libmachine: (ha-033260-m03) Waiting for machine to stop 119/120
	I0930 11:20:35.359753   32516 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 11:20:35.359810   32516 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 11:20:35.361727   32516 out.go:201] 
	W0930 11:20:35.363663   32516 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 11:20:35.363678   32516 out.go:270] * 
	* 
	W0930 11:20:35.365807   32516 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 11:20:35.366858   32516 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-033260 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-033260 --wait=true -v=7 --alsologtostderr
E0930 11:20:45.768127   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-033260 --wait=true -v=7 --alsologtostderr: exit status 80 (1m59.714873489s)

                                                
                                                
-- stdout --
	* [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	* Updating the running kvm2 "ha-033260" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	* Restarting existing kvm2 VM for "ha-033260-m02" ...
	* Updating the running kvm2 "ha-033260-m02" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:20:35.412602   33043 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:20:35.412849   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.412858   33043 out.go:358] Setting ErrFile to fd 2...
	I0930 11:20:35.412863   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.413024   33043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:20:35.413552   33043 out.go:352] Setting JSON to false
	I0930 11:20:35.414491   33043 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3782,"bootTime":1727691453,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:20:35.414596   33043 start.go:139] virtualization: kvm guest
	I0930 11:20:35.416608   33043 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:20:35.417763   33043 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:20:35.417777   33043 notify.go:220] Checking for updates...
	I0930 11:20:35.420438   33043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:20:35.421852   33043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:20:35.423268   33043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:20:35.424519   33043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:20:35.425736   33043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:20:35.427423   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:35.427536   33043 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:20:35.428064   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.428107   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.443112   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0930 11:20:35.443682   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.444204   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.444222   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.444550   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.444728   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.482622   33043 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:20:35.483910   33043 start.go:297] selected driver: kvm2
	I0930 11:20:35.483927   33043 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.484109   33043 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:20:35.484423   33043 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.484521   33043 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:20:35.500176   33043 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:20:35.500994   33043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:20:35.501027   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:20:35.501074   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:20:35.501131   33043 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.501263   33043 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.503184   33043 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:20:35.504511   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:20:35.504563   33043 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:20:35.504573   33043 cache.go:56] Caching tarball of preloaded images
	I0930 11:20:35.504731   33043 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:20:35.504748   33043 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:20:35.504904   33043 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:20:35.505134   33043 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:20:35.505183   33043 start.go:364] duration metric: took 27.274µs to acquireMachinesLock for "ha-033260"
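The machines lock acquired above is specified as {Delay:500ms Timeout:13m0s}. A minimal Go sketch of what those two knobs mean (retry every Delay, give up after Timeout), assuming a simple exclusive lock file; this is not minikube's actual lock package, and the /tmp path is made up for the example:

// Illustrative only: retry-acquire an exclusive lock file with Delay/Timeout
// semantics like the lock spec logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail while another process holds the lock file.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquire("/tmp/ha-033260.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}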
	I0930 11:20:35.505203   33043 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:20:35.505236   33043 fix.go:54] fixHost starting: 
	I0930 11:20:35.505507   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.505539   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.520330   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0930 11:20:35.520763   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.521246   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.521267   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.521605   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.521835   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.521965   33043 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:20:35.523567   33043 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:20:35.523602   33043 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:20:35.525750   33043 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:20:35.527061   33043 machine.go:93] provisionDockerMachine start ...
	I0930 11:20:35.527088   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.527326   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.530036   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530579   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.530600   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530780   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.530958   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531111   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.531336   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.531561   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.531576   33043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:20:35.649365   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.649400   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649690   33043 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:20:35.649710   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649919   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.652623   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653056   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.653103   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653299   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.653488   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653688   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653834   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.653997   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.654241   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.654260   33043 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:20:35.785013   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.785047   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.788437   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.788960   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.788993   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.789200   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.789404   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789576   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789719   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.789879   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.790046   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.790061   33043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:20:35.902798   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:20:35.902835   33043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:20:35.902868   33043 buildroot.go:174] setting up certificates
	I0930 11:20:35.902885   33043 provision.go:84] configureAuth start
	I0930 11:20:35.902905   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.903213   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:20:35.905874   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906221   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.906243   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906402   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.908695   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909090   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.909113   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909309   33043 provision.go:143] copyHostCerts
	I0930 11:20:35.909340   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909394   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:20:35.909406   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909486   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:20:35.909601   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909636   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:20:35.909647   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909686   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:20:35.909766   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909790   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:20:35.909794   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909825   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:20:35.909903   33043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:20:35.980635   33043 provision.go:177] copyRemoteCerts
	I0930 11:20:35.980685   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:20:35.980706   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.983637   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.983980   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.983998   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.984309   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.984502   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.984684   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.984848   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:20:36.072953   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:20:36.073023   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:20:36.102423   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:20:36.102509   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:20:36.135815   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:20:36.135913   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:20:36.166508   33043 provision.go:87] duration metric: took 263.6024ms to configureAuth
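The configureAuth step above mints a server certificate signed by the cluster CA with org=jenkins.ha-033260 and san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]. A minimal Go sketch of that kind of signing, assuming PEM-encoded ca.pem / ca-key.pem (PKCS#1 RSA) in the working directory and reusing the 26280h lifetime from the cluster config; it is an illustration, not minikube's provisioning code:

// Illustrative only: mint a server certificate from an existing CA,
// mirroring the SAN list and lifetime shown in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the first block's DER bytes.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-033260", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}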
	I0930 11:20:36.166535   33043 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:20:36.166819   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:36.166934   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:36.169482   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.169896   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:36.169922   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.170125   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:36.170342   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170514   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170642   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:36.170792   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:36.170996   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:36.171017   33043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:22:06.983121   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:22:06.983146   33043 machine.go:96] duration metric: took 1m31.456067098s to provisionDockerMachine
	I0930 11:22:06.983157   33043 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:22:06.983167   33043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:22:06.983186   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:06.983540   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:22:06.983587   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:06.986877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987470   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:06.987488   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987723   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:06.987912   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:06.988044   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:06.988157   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.073758   33043 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:22:07.078469   33043 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:22:07.078512   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:22:07.078605   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:22:07.078699   33043 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:22:07.078713   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:22:07.078804   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:22:07.089555   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:07.116194   33043 start.go:296] duration metric: took 133.023032ms for postStartSetup
	I0930 11:22:07.116254   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.116551   33043 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:22:07.116577   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.119461   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.119823   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.119858   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.120010   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.120203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.120359   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.120470   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	W0930 11:22:07.204626   33043 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 11:22:07.204654   33043 fix.go:56] duration metric: took 1m31.699418607s for fixHost
	I0930 11:22:07.204673   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.207768   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208205   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.208236   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208426   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.208670   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208815   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208920   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.209074   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:22:07.209303   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:22:07.209317   33043 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:22:07.318615   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695327.281322937
	
	I0930 11:22:07.318635   33043 fix.go:216] guest clock: 1727695327.281322937
	I0930 11:22:07.318652   33043 fix.go:229] Guest: 2024-09-30 11:22:07.281322937 +0000 UTC Remote: 2024-09-30 11:22:07.204660834 +0000 UTC m=+91.828672682 (delta=76.662103ms)
	I0930 11:22:07.318687   33043 fix.go:200] guest clock delta is within tolerance: 76.662103ms
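The clock check above compares the guest's `date +%s.%N` output against the host-side timestamp. A minimal Go sketch of that arithmetic, plugging in the two values from this log (it reproduces the 76.662103ms delta); the 2s tolerance is assumed for the example, not minikube's actual threshold:

// Illustrative only: recompute the guest-clock delta from the values logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixDotNanos turns `date +%s.%N` output ("1727695327.281322937") into a time.Time.
func parseUnixDotNanos(s string) time.Time {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec)
}

func main() {
	guest := parseUnixDotNanos("1727695327.281322937") // guest clock from the log
	host, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST",
		"2024-09-30 11:22:07.204660834 +0000 UTC") // host-side "Remote" timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for the example
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}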
	I0930 11:22:07.318695   33043 start.go:83] releasing machines lock for "ha-033260", held for 1m31.813499324s
	I0930 11:22:07.318717   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.318982   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:07.321877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322412   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.322444   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322594   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323100   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323285   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323407   33043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:22:07.323451   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.323488   33043 ssh_runner.go:195] Run: cat /version.json
	I0930 11:22:07.323513   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.326064   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326202   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326521   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326548   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326576   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326591   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326637   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326826   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.326854   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326968   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327118   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.327178   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.327254   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327385   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.415864   33043 ssh_runner.go:195] Run: systemctl --version
	I0930 11:22:07.451247   33043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:22:07.632639   33043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:22:07.641688   33043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:22:07.641764   33043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:22:07.651983   33043 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:22:07.652031   33043 start.go:495] detecting cgroup driver to use...
	I0930 11:22:07.652103   33043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:22:07.669168   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:22:07.684823   33043 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:22:07.684912   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:22:07.701483   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:22:07.716518   33043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:22:07.896967   33043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:22:08.050310   33043 docker.go:233] disabling docker service ...
	I0930 11:22:08.050371   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:22:08.068482   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:22:08.084459   33043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:22:08.236128   33043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:22:08.390802   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:22:08.406104   33043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:22:08.427375   33043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:22:08.427446   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.438743   33043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:22:08.438847   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.452067   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.463557   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.475079   33043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:22:08.487336   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.498829   33043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.511516   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.523240   33043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:22:08.533544   33043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:22:08.544108   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:08.698933   33043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:22:09.935253   33043 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.236281542s)
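For readability, the keys touched by the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (only the edited keys are reconstructed here; section headers and the rest of the drop-in file are not visible in this log):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]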
	I0930 11:22:09.935282   33043 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:22:09.935334   33043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:22:09.940570   33043 start.go:563] Will wait 60s for crictl version
	I0930 11:22:09.940624   33043 ssh_runner.go:195] Run: which crictl
	I0930 11:22:09.945362   33043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:22:09.989303   33043 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:22:09.989390   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.021074   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.054999   33043 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:22:10.056435   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:10.059297   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.059696   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:10.059727   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.060000   33043 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:22:10.065633   33043 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:22:10.065825   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:22:10.065888   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.114243   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.114265   33043 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:22:10.114317   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.150653   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.150674   33043 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:22:10.150709   33043 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:22:10.150850   33043 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:22:10.150941   33043 ssh_runner.go:195] Run: crio config
	I0930 11:22:10.206136   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:22:10.206155   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:22:10.206167   33043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:22:10.206190   33043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:22:10.206332   33043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:22:10.206353   33043 kube-vip.go:115] generating kube-vip config ...
	I0930 11:22:10.206392   33043 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:22:10.219053   33043 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:22:10.219173   33043 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:22:10.219254   33043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:22:10.229908   33043 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:22:10.230004   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:22:10.240121   33043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:22:10.258330   33043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:22:10.275729   33043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:22:10.294239   33043 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:22:10.312810   33043 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:22:10.318284   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:10.474551   33043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:22:10.491027   33043 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:22:10.491051   33043 certs.go:194] generating shared ca certs ...
	I0930 11:22:10.491069   33043 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.491243   33043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:22:10.491283   33043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:22:10.491302   33043 certs.go:256] generating profile certs ...
	I0930 11:22:10.491378   33043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:22:10.491404   33043 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:22:10.491428   33043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:22:10.563349   33043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 ...
	I0930 11:22:10.563384   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8: {Name:mkee749054ef5d747ecd6803933a55d7df9028fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563569   33043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 ...
	I0930 11:22:10.563581   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8: {Name:mk9e9a7e147c3768475898ec896a945ed1a2ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563657   33043 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:22:10.563846   33043 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:22:10.563993   33043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:22:10.564009   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:22:10.564024   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:22:10.564040   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:22:10.564063   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:22:10.564079   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:22:10.564094   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:22:10.564108   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:22:10.564123   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:22:10.564204   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:22:10.564237   33043 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:22:10.564251   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:22:10.564279   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:22:10.564308   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:22:10.564350   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:22:10.564409   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:10.564444   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.564467   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.564488   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.565081   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:22:10.592675   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:22:10.618318   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:22:10.644512   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:22:10.671272   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:22:10.697564   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:22:10.722738   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:22:10.749628   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:22:10.776815   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:22:10.803425   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:22:10.831267   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:22:10.857397   33043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:22:10.875093   33043 ssh_runner.go:195] Run: openssl version
	I0930 11:22:10.881398   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:22:10.892677   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897320   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897366   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.903164   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:22:10.912882   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:22:10.923908   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928941   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928987   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.935759   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:22:10.946855   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:22:10.958480   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963160   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963215   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.969693   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:22:10.979808   33043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:22:10.984752   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:22:10.990728   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:22:10.996688   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:22:11.002573   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:22:11.008376   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:22:11.014247   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:22:11.020178   33043 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:22:11.020295   33043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:22:11.020338   33043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:22:11.063287   33043 cri.go:89] found id: "67ee9c49babe93d74d8ee81ea2f17248f722c6211abed7e9723015bda428c4e0"
	I0930 11:22:11.063310   33043 cri.go:89] found id: "e591b4f157ddf0eb6b48bdb31431c92024f32bbe7aa2f96293514fffeed045fe"
	I0930 11:22:11.063314   33043 cri.go:89] found id: "9dc9be1c78f6ce470cf1031b617b8d94b60138c3c1bd738c2bafa9f07db57573"
	I0930 11:22:11.063317   33043 cri.go:89] found id: "5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c"
	I0930 11:22:11.063320   33043 cri.go:89] found id: "856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0"
	I0930 11:22:11.063323   33043 cri.go:89] found id: "2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7"
	I0930 11:22:11.063325   33043 cri.go:89] found id: "347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c"
	I0930 11:22:11.063328   33043 cri.go:89] found id: "6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346"
	I0930 11:22:11.063330   33043 cri.go:89] found id: "7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee"
	I0930 11:22:11.063334   33043 cri.go:89] found id: "aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8"
	I0930 11:22:11.063341   33043 cri.go:89] found id: "e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489"
	I0930 11:22:11.063343   33043 cri.go:89] found id: "2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2"
	I0930 11:22:11.063346   33043 cri.go:89] found id: "cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac"
	I0930 11:22:11.063349   33043 cri.go:89] found id: ""
	I0930 11:22:11.063386   33043 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-033260 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-033260
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260: exit status 2 (15.164795572s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.585360849s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:20:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:20:35.412602   33043 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:20:35.412849   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.412858   33043 out.go:358] Setting ErrFile to fd 2...
	I0930 11:20:35.412863   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.413024   33043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:20:35.413552   33043 out.go:352] Setting JSON to false
	I0930 11:20:35.414491   33043 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3782,"bootTime":1727691453,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:20:35.414596   33043 start.go:139] virtualization: kvm guest
	I0930 11:20:35.416608   33043 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:20:35.417763   33043 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:20:35.417777   33043 notify.go:220] Checking for updates...
	I0930 11:20:35.420438   33043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:20:35.421852   33043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:20:35.423268   33043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:20:35.424519   33043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:20:35.425736   33043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:20:35.427423   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:35.427536   33043 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:20:35.428064   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.428107   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.443112   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0930 11:20:35.443682   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.444204   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.444222   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.444550   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.444728   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.482622   33043 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:20:35.483910   33043 start.go:297] selected driver: kvm2
	I0930 11:20:35.483927   33043 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.484109   33043 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:20:35.484423   33043 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.484521   33043 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:20:35.500176   33043 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:20:35.500994   33043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:20:35.501027   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:20:35.501074   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:20:35.501131   33043 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.501263   33043 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.503184   33043 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:20:35.504511   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:20:35.504563   33043 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:20:35.504573   33043 cache.go:56] Caching tarball of preloaded images
	I0930 11:20:35.504731   33043 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:20:35.504748   33043 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:20:35.504904   33043 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:20:35.505134   33043 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:20:35.505183   33043 start.go:364] duration metric: took 27.274µs to acquireMachinesLock for "ha-033260"
	I0930 11:20:35.505203   33043 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:20:35.505236   33043 fix.go:54] fixHost starting: 
	I0930 11:20:35.505507   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.505539   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.520330   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0930 11:20:35.520763   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.521246   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.521267   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.521605   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.521835   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.521965   33043 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:20:35.523567   33043 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:20:35.523602   33043 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:20:35.525750   33043 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:20:35.527061   33043 machine.go:93] provisionDockerMachine start ...
	I0930 11:20:35.527088   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.527326   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.530036   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530579   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.530600   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530780   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.530958   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531111   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.531336   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.531561   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.531576   33043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:20:35.649365   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.649400   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649690   33043 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:20:35.649710   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649919   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.652623   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653056   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.653103   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653299   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.653488   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653688   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653834   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.653997   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.654241   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.654260   33043 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:20:35.785013   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.785047   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.788437   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.788960   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.788993   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.789200   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.789404   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789576   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789719   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.789879   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.790046   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.790061   33043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:20:35.902798   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:20:35.902835   33043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:20:35.902868   33043 buildroot.go:174] setting up certificates
	I0930 11:20:35.902885   33043 provision.go:84] configureAuth start
	I0930 11:20:35.902905   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.903213   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:20:35.905874   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906221   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.906243   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906402   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.908695   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909090   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.909113   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909309   33043 provision.go:143] copyHostCerts
	I0930 11:20:35.909340   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909394   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:20:35.909406   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909486   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:20:35.909601   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909636   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:20:35.909647   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909686   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:20:35.909766   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909790   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:20:35.909794   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909825   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:20:35.909903   33043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:20:35.980635   33043 provision.go:177] copyRemoteCerts
	I0930 11:20:35.980685   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:20:35.980706   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.983637   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.983980   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.983998   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.984309   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.984502   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.984684   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.984848   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:20:36.072953   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:20:36.073023   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:20:36.102423   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:20:36.102509   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:20:36.135815   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:20:36.135913   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:20:36.166508   33043 provision.go:87] duration metric: took 263.6024ms to configureAuth
	I0930 11:20:36.166535   33043 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:20:36.166819   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:36.166934   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:36.169482   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.169896   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:36.169922   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.170125   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:36.170342   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170514   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170642   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:36.170792   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:36.170996   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:36.171017   33043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:22:06.983121   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:22:06.983146   33043 machine.go:96] duration metric: took 1m31.456067098s to provisionDockerMachine
	I0930 11:22:06.983157   33043 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:22:06.983167   33043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:22:06.983186   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:06.983540   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:22:06.983587   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:06.986877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987470   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:06.987488   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987723   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:06.987912   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:06.988044   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:06.988157   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.073758   33043 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:22:07.078469   33043 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:22:07.078512   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:22:07.078605   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:22:07.078699   33043 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:22:07.078713   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:22:07.078804   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:22:07.089555   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:07.116194   33043 start.go:296] duration metric: took 133.023032ms for postStartSetup
	I0930 11:22:07.116254   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.116551   33043 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:22:07.116577   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.119461   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.119823   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.119858   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.120010   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.120203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.120359   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.120470   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	W0930 11:22:07.204626   33043 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 11:22:07.204654   33043 fix.go:56] duration metric: took 1m31.699418607s for fixHost
	I0930 11:22:07.204673   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.207768   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208205   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.208236   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208426   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.208670   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208815   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208920   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.209074   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:22:07.209303   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:22:07.209317   33043 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:22:07.318615   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695327.281322937
	
	I0930 11:22:07.318635   33043 fix.go:216] guest clock: 1727695327.281322937
	I0930 11:22:07.318652   33043 fix.go:229] Guest: 2024-09-30 11:22:07.281322937 +0000 UTC Remote: 2024-09-30 11:22:07.204660834 +0000 UTC m=+91.828672682 (delta=76.662103ms)
	I0930 11:22:07.318687   33043 fix.go:200] guest clock delta is within tolerance: 76.662103ms
	I0930 11:22:07.318695   33043 start.go:83] releasing machines lock for "ha-033260", held for 1m31.813499324s
	I0930 11:22:07.318717   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.318982   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:07.321877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322412   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.322444   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322594   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323100   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323285   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323407   33043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:22:07.323451   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.323488   33043 ssh_runner.go:195] Run: cat /version.json
	I0930 11:22:07.323513   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.326064   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326202   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326521   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326548   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326576   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326591   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326637   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326826   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.326854   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326968   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327118   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.327178   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.327254   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327385   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.415864   33043 ssh_runner.go:195] Run: systemctl --version
	I0930 11:22:07.451247   33043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:22:07.632639   33043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:22:07.641688   33043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:22:07.641764   33043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:22:07.651983   33043 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
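The find/mv step above renames any bridge or podman CNI configs so CRI-O ignores them. A rough Go sketch with the same effect, shown only as an illustration; the actual logic is the shell command in the log, and here nothing matches, hence "nothing to disable".

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue // already disabled or not a config file
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }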
	I0930 11:22:07.652031   33043 start.go:495] detecting cgroup driver to use...
	I0930 11:22:07.652103   33043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:22:07.669168   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:22:07.684823   33043 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:22:07.684912   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:22:07.701483   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:22:07.716518   33043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:22:07.896967   33043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:22:08.050310   33043 docker.go:233] disabling docker service ...
	I0930 11:22:08.050371   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:22:08.068482   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:22:08.084459   33043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:22:08.236128   33043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:22:08.390802   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:22:08.406104   33043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:22:08.427375   33043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:22:08.427446   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.438743   33043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:22:08.438847   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.452067   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.463557   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.475079   33043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:22:08.487336   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.498829   33043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.511516   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
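These sed invocations rewrite /etc/crio/crio.conf.d/02-crio.conf in place. For illustration, a small Go sketch reproducing the first two substitutions (pause image and cgroup manager) with equivalent regular expressions; error handling is reduced to panics, and this is not minikube's actual implementation, just the effect of the commands shown above.

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }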
	I0930 11:22:08.523240   33043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:22:08.533544   33043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:22:08.544108   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:08.698933   33043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:22:09.935253   33043 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.236281542s)
	I0930 11:22:09.935282   33043 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:22:09.935334   33043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:22:09.940570   33043 start.go:563] Will wait 60s for crictl version
	I0930 11:22:09.940624   33043 ssh_runner.go:195] Run: which crictl
	I0930 11:22:09.945362   33043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:22:09.989303   33043 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:22:09.989390   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.021074   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.054999   33043 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:22:10.056435   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:10.059297   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.059696   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:10.059727   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.060000   33043 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:22:10.065633   33043 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:22:10.065825   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:22:10.065888   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.114243   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.114265   33043 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:22:10.114317   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.150653   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.150674   33043 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:22:10.150709   33043 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:22:10.150850   33043 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:22:10.150941   33043 ssh_runner.go:195] Run: crio config
	I0930 11:22:10.206136   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:22:10.206155   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:22:10.206167   33043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:22:10.206190   33043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:22:10.206332   33043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
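The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A short Go sketch that walks such a stream and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 is available and a local copy of the config saved as kubeadm.yaml (hypothetical file name for the illustration).

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // assumption: any YAML decoder with multi-document support works
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // end of the YAML stream
    			}
    			panic(err)
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }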
	
	I0930 11:22:10.206353   33043 kube-vip.go:115] generating kube-vip config ...
	I0930 11:22:10.206392   33043 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:22:10.219053   33043 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:22:10.219173   33043 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
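The manifest above runs kube-vip with control-plane load-balancing enabled and advertises 192.168.39.254:8443 as the HA virtual IP (the APIServerHAVIP in the cluster config). A trivial Go reachability probe against that endpoint, for illustration only; the address is the one from this log and would differ per cluster.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Probe the control-plane VIP advertised by kube-vip in the manifest above.
    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 5*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("VIP reachable")
    }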
	I0930 11:22:10.219254   33043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:22:10.229908   33043 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:22:10.230004   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:22:10.240121   33043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:22:10.258330   33043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:22:10.275729   33043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:22:10.294239   33043 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:22:10.312810   33043 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:22:10.318284   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:10.474551   33043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:22:10.491027   33043 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:22:10.491051   33043 certs.go:194] generating shared ca certs ...
	I0930 11:22:10.491069   33043 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.491243   33043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:22:10.491283   33043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:22:10.491302   33043 certs.go:256] generating profile certs ...
	I0930 11:22:10.491378   33043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:22:10.491404   33043 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:22:10.491428   33043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:22:10.563349   33043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 ...
	I0930 11:22:10.563384   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8: {Name:mkee749054ef5d747ecd6803933a55d7df9028fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563569   33043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 ...
	I0930 11:22:10.563581   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8: {Name:mk9e9a7e147c3768475898ec896a945ed1a2ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563657   33043 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:22:10.563846   33043 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
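The profile certificate generated above carries the listed IP SANs (the cluster service IP, loopback, the three control-plane node IPs, and the VIP). As a generic illustration of building a certificate with those SANs using Go's standard library; this sketch self-signs for brevity, whereas the real apiserver cert is signed by the minikube CA.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs taken from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.249"), net.ParseIP("192.168.39.3"),
    			net.ParseIP("192.168.39.238"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	// Self-signed here for brevity; the real profile cert is signed by the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }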
	I0930 11:22:10.563993   33043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:22:10.564009   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:22:10.564024   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:22:10.564040   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:22:10.564063   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:22:10.564079   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:22:10.564094   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:22:10.564108   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:22:10.564123   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:22:10.564204   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:22:10.564237   33043 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:22:10.564251   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:22:10.564279   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:22:10.564308   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:22:10.564350   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:22:10.564409   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:10.564444   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.564467   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.564488   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.565081   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:22:10.592675   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:22:10.618318   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:22:10.644512   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:22:10.671272   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:22:10.697564   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:22:10.722738   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:22:10.749628   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:22:10.776815   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:22:10.803425   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:22:10.831267   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:22:10.857397   33043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:22:10.875093   33043 ssh_runner.go:195] Run: openssl version
	I0930 11:22:10.881398   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:22:10.892677   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897320   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897366   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.903164   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:22:10.912882   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:22:10.923908   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928941   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928987   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.935759   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:22:10.946855   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:22:10.958480   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963160   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963215   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.969693   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:22:10.979808   33043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:22:10.984752   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:22:10.990728   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:22:10.996688   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:22:11.002573   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:22:11.008376   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:22:11.014247   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
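Each of these "openssl x509 -checkend 86400" probes asks whether a certificate expires within the next 24 hours. An equivalent check written in Go, shown only as an illustration; the path used in main is one of the files probed above.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring "openssl x509 -noout -in <path> -checkend <seconds>".
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }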
	I0930 11:22:11.020178   33043 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:22:11.020295   33043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:22:11.020338   33043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:22:11.063287   33043 cri.go:89] found id: "67ee9c49babe93d74d8ee81ea2f17248f722c6211abed7e9723015bda428c4e0"
	I0930 11:22:11.063310   33043 cri.go:89] found id: "e591b4f157ddf0eb6b48bdb31431c92024f32bbe7aa2f96293514fffeed045fe"
	I0930 11:22:11.063314   33043 cri.go:89] found id: "9dc9be1c78f6ce470cf1031b617b8d94b60138c3c1bd738c2bafa9f07db57573"
	I0930 11:22:11.063317   33043 cri.go:89] found id: "5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c"
	I0930 11:22:11.063320   33043 cri.go:89] found id: "856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0"
	I0930 11:22:11.063323   33043 cri.go:89] found id: "2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7"
	I0930 11:22:11.063325   33043 cri.go:89] found id: "347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c"
	I0930 11:22:11.063328   33043 cri.go:89] found id: "6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346"
	I0930 11:22:11.063330   33043 cri.go:89] found id: "7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee"
	I0930 11:22:11.063334   33043 cri.go:89] found id: "aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8"
	I0930 11:22:11.063341   33043 cri.go:89] found id: "e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489"
	I0930 11:22:11.063343   33043 cri.go:89] found id: "2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2"
	I0930 11:22:11.063346   33043 cri.go:89] found id: "cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac"
	I0930 11:22:11.063349   33043 cri.go:89] found id: ""
	I0930 11:22:11.063386   33043 ssh_runner.go:195] Run: sudo runc list -f json
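The crictl query above is what produces the "found id:" lines: a filtered listing of all kube-system containers by ID. A small Go sketch that runs the same filtered listing and prints the IDs in that shape, for illustration only; it assumes crictl and passwordless sudo are available on the host it runs on.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Println("found id:", id)
    	}
    }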
	
	
	==> CRI-O <==
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.855709220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d5fa76-7724-4b9f-8ece-e7c973db2f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.855781706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d5fa76-7724-4b9f-8ece-e7c973db2f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.856146176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"
metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.ku
bernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a20
4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2
ca88893e1f6ac643a,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85
a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kuberne
tes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5
048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CO
NTAINER_EXITED,CreatedAt:1727694713298082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d5fa76-7724-4b9f-8ece-e7c973db2f55 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 conmon[4846]: conmon 710727c61a4701ae19d9 <ndebug>: container PID: 4857
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.861425305Z" level=debug msg="Received container pid: 4857" file="oci/runtime_oci.go:284" id=4bcdd7c6-5253-4b8b-8a7c-85495f88ba36 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.873468269Z" level=info msg="Created container 710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1: default/busybox-7dff88458-nbhwc/busybox" file="server/container_create.go:491" id=4bcdd7c6-5253-4b8b-8a7c-85495f88ba36 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.873553487Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,}" file="otel-collector/interceptors.go:74" id=4bcdd7c6-5253-4b8b-8a7c-85495f88ba36 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.874396910Z" level=debug msg="Request: &StartContainerRequest{ContainerId:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,}" file="otel-collector/interceptors.go:62" id=32cc0ea0-43ff-4d90-ae8c-abf5f4f7c37a name=/runtime.v1.RuntimeService/StartContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.874444895Z" level=info msg="Starting container: 710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1" file="server/container_start.go:21" id=32cc0ea0-43ff-4d90-ae8c-abf5f4f7c37a name=/runtime.v1.RuntimeService/StartContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.882086385Z" level=info msg="Started container" PID=4857 containerID=710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1 description=default/busybox-7dff88458-nbhwc/busybox file="server/container_start.go:115" id=32cc0ea0-43ff-4d90-ae8c-abf5f4f7c37a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.896440103Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=32cc0ea0-43ff-4d90-ae8c-abf5f4f7c37a name=/runtime.v1.RuntimeService/StartContainer
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.915052653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc144f19-e042-45ed-881f-db8ea2ef2d68 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.915162706Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc144f19-e042-45ed-881f-db8ea2ef2d68 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.919765318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8d00875-362e-47ca-be5d-fc128b79ea85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.920252416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695370920223980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8d00875-362e-47ca-be5d-fc128b79ea85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.920717564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd664695-9ec3-417e-a4d2-b74d6b058d09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.920848775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd664695-9ec3-417e-a4d2-b74d6b058d09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.921692519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd664695-9ec3-417e-a4d2-b74d6b058d09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.961950146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0702024b-44bd-4964-80b8-c202ae4b1376 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.962050087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0702024b-44bd-4964-80b8-c202ae4b1376 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.963478647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdee61a1-5581-4d7f-8b7a-9bcb02588654 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.963985701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695370963960336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdee61a1-5581-4d7f-8b7a-9bcb02588654 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.964800817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1256728-b55c-4155-b411-946fdfef7874 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.964858299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1256728-b55c-4155-b411-946fdfef7874 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:50 ha-033260 crio[3649]: time="2024-09-30 11:22:50.965204791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1256728-b55c-4155-b411-946fdfef7874 name=/runtime.v1.RuntimeService/ListContainers
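(Editorial note, not part of the captured logs.) The journal excerpt above shows a CRI client, most likely the kubelet, repeatedly issuing Version, ImageFsInfo and ListContainers requests against CRI-O's gRPC CRI endpoint. As a minimal sketch of the same ListContainers call, assuming the default CRI-O socket path /var/run/crio/crio.sock and the published k8s.io/cri-api client:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// CRI-O's default socket path; adjust if the runtime is configured differently.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// An empty filter mirrors the "No filters were applied, returning full
	// container list" debug lines in the journal above.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}

The same listing is available on the node with `sudo crictl ps -a`, which appears to be the source of the "==> container status <==" table below.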
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	710727c61a470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      Less than a second ago   Running             busybox                   1                   b6882d75e9725       busybox-7dff88458-nbhwc
	553822716c2e0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      22 seconds ago           Running             kube-vip                  0                   1a455b42d02b7       kube-vip-ha-033260
	cb85b01ef51db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      33 seconds ago           Exited              storage-provisioner       3                   c51802eecf1d4       storage-provisioner
	8702bda4a75f9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      33 seconds ago           Running             coredns                   1                   51f8166139d91       coredns-7c65d6cfc9-5frmm
	d71ac252dcd80       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      33 seconds ago           Running             kindnet-cni               1                   efa9ebbfa94e8       kindnet-g94k6
	5a83e9ce6a32a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      33 seconds ago           Exited              kube-apiserver            2                   72d470d360a7c       kube-apiserver-ha-033260
	bc5f830d46b01       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      33 seconds ago           Running             coredns                   1                   d42975c48cf04       coredns-7c65d6cfc9-kt87v
	0a2a1de86feca       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      33 seconds ago           Running             kube-proxy                1                   0d34e22a6894a       kube-proxy-mxvxr
	9ce2338b980fa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      33 seconds ago           Exited              kube-controller-manager   1                   9591ef5a18733       kube-controller-manager-ha-033260
	7ddf3925913de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      33 seconds ago           Running             kube-scheduler            1                   36d814d7ad35f       kube-scheduler-ha-033260
	6209266b6bd43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      33 seconds ago           Running             etcd                      1                   703c1a1dd3cad       etcd-ha-033260
	5d1585ef6941b       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago            Exited              kube-vip                  1                   2bd722c6afa63       kube-vip-ha-033260
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago            Exited              busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago           Exited              coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago           Exited              coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago           Exited              kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago           Exited              kindnet-cni               0                   b2990036962da       kindnet-g94k6
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago           Exited              etcd                      0                   f789f882a4d3c       etcd-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago           Exited              kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1722&timeout=6m53s&timeoutSeconds=413&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e] <==
	Trace[133928459]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:33.952)
	Trace[133928459]: [10.001920301s] [10.001920301s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[461234772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.420) (total time: 10001ms):
	Trace[461234772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.421)
	Trace[461234772]: [10.001239871s] [10.001239871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1495932049]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.566) (total time: 10001ms):
	Trace[1495932049]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.568)
	Trace[1495932049]: [10.00173189s] [10.00173189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1696823593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:23.560) (total time: 10001ms):
	Trace[1696823593]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:33.562)
	Trace[1696823593]: [10.001227226s] [10.001227226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1783554872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:26.582) (total time: 10001ms):
	Trace[1783554872]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:36.583)
	Trace[1783554872]: [10.001386462s] [10.001386462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	[Sep30 11:19] kauditd_printk_skb: 1 callbacks suppressed
	[Sep30 11:22] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.171159] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +0.178992] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +0.163482] systemd-fstab-generator[3610]: Ignoring "noauto" option for root device
	[  +0.302477] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +1.773800] systemd-fstab-generator[3737]: Ignoring "noauto" option for root device
	[  +6.561875] kauditd_printk_skb: 122 callbacks suppressed
	[ +11.942924] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.334690] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699] <==
	{"level":"info","ts":"2024-09-30T11:22:47.225815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:47.225842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:47.375903Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:47.877121Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:48.378149Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:48.405680Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:48.413124Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:48.417744Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-30T11:22:48.418952Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: i/o timeout"}
	{"level":"warn","ts":"2024-09-30T11:22:48.879139Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-30T11:22:48.925819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:48.925931Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:48.925966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:48.925999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:48.926039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:49.380336Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:49.880937Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:50.381263Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-30T11:22:50.625433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:50.881859Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:51.382006Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-30T11:20:36.370065Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:20:36.370167Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T11:20:36.370398Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-30T11:20:36.370773Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370907Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371265Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371274Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371355Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371462Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371746Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371825Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371896Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371926Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.375304Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"warn","ts":"2024-09-30T11:20:36.375330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.79480238s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-09-30T11:20:36.375483Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-033260","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"traceutil/trace.go:171","msg":"trace[311727536] range","detail":"{range_begin:; range_end:; }","duration":"8.794947939s","start":"2024-09-30T11:20:27.580499Z","end":"2024-09-30T11:20:36.375447Z","steps":["trace[311727536] 'agreement among raft nodes before linearized reading'  (duration: 8.794800052s)"],"step_count":1}
	
	
	==> kernel <==
	 11:22:51 up 11 min,  0 users,  load average: 0.38, 0.44, 0.24
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:20:07.854375       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:07.854476       1 main.go:299] handling current node
	I0930 11:20:07.854505       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:07.854522       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:07.854768       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:07.854802       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:07.854864       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:07.854882       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862189       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:17.862294       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862450       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:17.862475       1 main.go:299] handling current node
	I0930 11:20:17.862496       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:17.862512       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:17.862579       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:17.862598       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.860296       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:27.860570       1 main.go:299] handling current node
	I0930 11:20:27.860675       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:27.860705       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:27.860964       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:27.860987       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.861050       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:27.861068       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	E0930 11:20:36.321163       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22] <==
	I0930 11:22:18.408919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0930 11:22:18.411055       1 main.go:139] hostIP = 192.168.39.249
	podIP = 192.168.39.249
	I0930 11:22:18.411374       1 main.go:148] setting mtu 1500 for CNI 
	I0930 11:22:18.412734       1 main.go:178] kindnetd IP family: "ipv4"
	I0930 11:22:18.412859       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0930 11:22:19.102768       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	W0930 11:22:29.112365       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0930 11:22:29.113034       1 trace.go:236] Trace[1532943992]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (30-Sep-2024 11:22:19.102) (total time: 10009ms):
	Trace[1532943992]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10009ms (11:22:29.112)
	Trace[1532943992]: [10.009569146s] [10.009569146s] END
	E0930 11:22:29.113093       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0930 11:22:39.966399       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	E0930 11:22:39.966874       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	W0930 11:22:42.909485       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	E0930 11:22:42.909698       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3] <==
	I0930 11:22:18.351040       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:22:18.358932       1 server.go:142] Version: v1.31.1
	I0930 11:22:18.358991       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:18.951862       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:22:18.952819       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:22:18.958733       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:22:18.958775       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:22:18.959115       1 instance.go:232] Using reconciler: lease
	W0930 11:22:38.930134       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0930 11:22:38.930367       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:22:38.959894       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0] <==
	I0930 11:22:18.773138       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:22:19.094923       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:22:19.094962       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:19.097540       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:22:19.097832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:22:19.097963       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:22:19.098124       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:22:39.965712       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030] <==
	I0930 11:22:19.122779       1 server_linux.go:66] "Using iptables proxy"
	E0930 11:22:19.157903       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.183052       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.549933       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:22.621776       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:25.694800       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:31.839062       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:44.126368       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
	E0930 11:19:18.749153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:18.749098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:18.749189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:21.949265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:21.949358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:28.093023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:28.093160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:33.661953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:33.662031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.734817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.734886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.735220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.735300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:52.094043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:52.094172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:32.031340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:32.032079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:20:20.605503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0930 11:20:22.554431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:23.502921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0930 11:20:23.897499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0930 11:20:25.058394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0930 11:20:25.967971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.690085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.719785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0930 11:20:28.017485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0930 11:20:28.510153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0930 11:20:28.970490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:30.680510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0930 11:20:30.842087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0930 11:20:32.435288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0930 11:20:34.062903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0930 11:20:36.288434       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801] <==
	E0930 11:22:47.146722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.351451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.351518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.380395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.380458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.462180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.462250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.133417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.133538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.385125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.385238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.492946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.493080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.797564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.797767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.836125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.836250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.876443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.876506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.938087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.938223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:49.872885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:49.873029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:50.083149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:50.083283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 30 11:22:41 ha-033260 kubelet[1307]: I0930 11:22:41.053320    1307 status_manager.go:851] "Failed to get status for pod" podUID="964381ab-f2ac-4361-a7e0-5212fff5e26e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:41 ha-033260 kubelet[1307]: E0930 11:22:41.053611    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:41 ha-033260 kubelet[1307]: I0930 11:22:41.559130    1307 scope.go:117] "RemoveContainer" containerID="9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0"
	Sep 30 11:22:41 ha-033260 kubelet[1307]: E0930 11:22:41.559345    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-033260_kube-system(43955f8cf95999657a88952585c93768)\"" pod="kube-system/kube-controller-manager-ha-033260" podUID="43955f8cf95999657a88952585c93768"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: W0930 11:22:44.125186    1307 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1765": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125716    1307 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1765\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125576    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: I0930 11:22:44.125406    1307 status_manager.go:851] "Failed to get status for pod" podUID="4ee6f0cb154890b5d1bf6173256957d4" pod="kube-system/etcd-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125276    1307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.247863    1307 scope.go:117] "RemoveContainer" containerID="9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.248393    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-033260_kube-system(43955f8cf95999657a88952585c93768)\"" pod="kube-system/kube-controller-manager-ha-033260" podUID="43955f8cf95999657a88952585c93768"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.864719    1307 scope.go:117] "RemoveContainer" containerID="5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.864888    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-033260_kube-system(6c1732ebd63e52d0c6ac6d9cd648cff5)\"" pod="kube-system/kube-apiserver-ha-033260" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197121    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197334    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197299    1307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-033260.17fa018ec87563f8  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-033260,UID:6c1732ebd63e52d0c6ac6d9cd648cff5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-033260,},FirstTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,LastTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:
nil,ReportingController:kubelet,ReportingInstance:ha-033260,}"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: I0930 11:22:47.197407    1307 status_manager.go:851] "Failed to get status for pod" podUID="734999721cb3f48c24354599fcaf3db2" pod="kube-system/kube-scheduler-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197256    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197144    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197942    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688509    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688533    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269074    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269229    1307 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: I0930 11:22:50.269083    1307 status_manager.go:851] "Failed to get status for pod" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5" pod="kube-system/kube-apiserver-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:22:50.373078   33696 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260: exit status 2 (224.811978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-033260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (258.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (2.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-033260 node delete m03 -v=7 --alsologtostderr: exit status 83 (131.857927ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-033260-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-033260"

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:22:52.197196   33778 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:22:52.197490   33778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:52.197501   33778 out.go:358] Setting ErrFile to fd 2...
	I0930 11:22:52.197507   33778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:52.197699   33778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:22:52.197958   33778 mustload.go:65] Loading cluster: ha-033260
	I0930 11:22:52.198327   33778 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:22:52.198663   33778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.198713   33778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.213554   33778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41153
	I0930 11:22:52.214005   33778 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.214540   33778 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.214568   33778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.214913   33778 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.215083   33778 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:22:52.216672   33778 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:22:52.216940   33778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.216978   33778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.231335   33778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0930 11:22:52.231769   33778 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.232248   33778 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.232268   33778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.232579   33778 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.232732   33778 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:52.233147   33778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.233184   33778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.247827   33778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0930 11:22:52.248178   33778 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.248614   33778 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.248636   33778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.248932   33778 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.249120   33778 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:22:52.250448   33778 host.go:66] Checking if "ha-033260-m02" exists ...
	I0930 11:22:52.250746   33778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.250803   33778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.265080   33778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0930 11:22:52.265488   33778 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.265935   33778 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.265952   33778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.266236   33778 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.266381   33778 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:22:52.266893   33778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.266935   33778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.281590   33778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0930 11:22:52.282019   33778 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.282530   33778 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.282549   33778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.282858   33778 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.283019   33778 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:22:52.286911   33778 out.go:177] * The control-plane node ha-033260-m03 host is not running: state=Stopped
	I0930 11:22:52.288486   33778 out.go:177]   To start a cluster, run: "minikube start -p ha-033260"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-linux-amd64 -p ha-033260 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr: exit status 7 (441.400194ms)

                                                
                                                
-- stdout --
	ha-033260
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-033260-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-033260-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-033260-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:22:52.330410   33820 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:22:52.330515   33820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:52.330524   33820 out.go:358] Setting ErrFile to fd 2...
	I0930 11:22:52.330529   33820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:52.330701   33820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:22:52.330873   33820 out.go:352] Setting JSON to false
	I0930 11:22:52.330895   33820 mustload.go:65] Loading cluster: ha-033260
	I0930 11:22:52.330943   33820 notify.go:220] Checking for updates...
	I0930 11:22:52.331317   33820 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:22:52.331333   33820 status.go:174] checking status of ha-033260 ...
	I0930 11:22:52.331737   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.331802   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.350534   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I0930 11:22:52.350991   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.351612   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.351643   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.351974   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.352140   33820 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:22:52.353653   33820 status.go:364] ha-033260 host status = "Running" (err=<nil>)
	I0930 11:22:52.353669   33820 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:22:52.353944   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.353985   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.368590   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45029
	I0930 11:22:52.369015   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.369475   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.369494   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.369812   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.369979   33820 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:52.372363   33820 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:52.372797   33820 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:52.372821   33820 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:52.372954   33820 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:22:52.373241   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.373281   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.389092   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0930 11:22:52.389849   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.390386   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.390405   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.390742   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.390908   33820 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:52.391087   33820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:22:52.391114   33820 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:52.393822   33820 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:52.394261   33820 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:52.394290   33820 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:52.394420   33820 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:52.394574   33820 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:52.394731   33820 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:52.394840   33820 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:52.477498   33820 ssh_runner.go:195] Run: systemctl --version
	I0930 11:22:52.484170   33820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:22:52.507876   33820 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:22:52.507912   33820 api_server.go:166] Checking apiserver status ...
	I0930 11:22:52.507954   33820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0930 11:22:52.521995   33820 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:22:52.522015   33820 status.go:456] ha-033260 apiserver status = Running (err=<nil>)
	I0930 11:22:52.522024   33820 status.go:176] ha-033260 status: &{Name:ha-033260 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:22:52.522040   33820 status.go:174] checking status of ha-033260-m02 ...
	I0930 11:22:52.522441   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.522484   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.538293   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I0930 11:22:52.538709   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.539165   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.539184   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.539560   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.539735   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:22:52.541134   33820 status.go:364] ha-033260-m02 host status = "Running" (err=<nil>)
	I0930 11:22:52.541150   33820 host.go:66] Checking if "ha-033260-m02" exists ...
	I0930 11:22:52.541423   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.541454   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.556483   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0930 11:22:52.557008   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.557505   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.557521   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.557876   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.558055   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:22:52.560637   33820 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:22:52.560991   33820 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:22:22 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:22:52.561022   33820 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:22:52.561137   33820 host.go:66] Checking if "ha-033260-m02" exists ...
	I0930 11:22:52.561498   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.561541   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.576283   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45327
	I0930 11:22:52.576704   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.577199   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.577225   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.577491   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.577704   33820 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:22:52.577873   33820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:22:52.577896   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:22:52.580703   33820 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:22:52.581113   33820 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:22:22 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:22:52.581137   33820 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:22:52.581253   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:22:52.581433   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:22:52.581546   33820 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:22:52.581707   33820 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:22:52.665312   33820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:22:52.680402   33820 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:22:52.680431   33820 api_server.go:166] Checking apiserver status ...
	I0930 11:22:52.680470   33820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0930 11:22:52.693917   33820 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:22:52.693940   33820 status.go:456] ha-033260-m02 apiserver status = Stopped (err=<nil>)
	I0930 11:22:52.693948   33820 status.go:176] ha-033260-m02 status: &{Name:ha-033260-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:22:52.693961   33820 status.go:174] checking status of ha-033260-m03 ...
	I0930 11:22:52.694258   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.694295   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.709013   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44581
	I0930 11:22:52.709442   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.709900   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.709915   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.710170   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.710351   33820 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:22:52.711814   33820 status.go:364] ha-033260-m03 host status = "Stopped" (err=<nil>)
	I0930 11:22:52.711827   33820 status.go:377] host is not running, skipping remaining checks
	I0930 11:22:52.711832   33820 status.go:176] ha-033260-m03 status: &{Name:ha-033260-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:22:52.711860   33820 status.go:174] checking status of ha-033260-m04 ...
	I0930 11:22:52.712233   33820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:52.712275   33820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:52.727159   33820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0930 11:22:52.727682   33820 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:52.728179   33820 main.go:141] libmachine: Using API Version  1
	I0930 11:22:52.728199   33820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:52.728565   33820 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:52.728748   33820 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:22:52.730538   33820 status.go:364] ha-033260-m04 host status = "Stopped" (err=<nil>)
	I0930 11:22:52.730553   33820 status.go:377] host is not running, skipping remaining checks
	I0930 11:22:52.730573   33820 status.go:176] ha-033260-m04 status: &{Name:ha-033260-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260: exit status 2 (228.711462ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.393874571s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:20:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:20:35.412602   33043 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:20:35.412849   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.412858   33043 out.go:358] Setting ErrFile to fd 2...
	I0930 11:20:35.412863   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.413024   33043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:20:35.413552   33043 out.go:352] Setting JSON to false
	I0930 11:20:35.414491   33043 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3782,"bootTime":1727691453,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:20:35.414596   33043 start.go:139] virtualization: kvm guest
	I0930 11:20:35.416608   33043 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:20:35.417763   33043 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:20:35.417777   33043 notify.go:220] Checking for updates...
	I0930 11:20:35.420438   33043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:20:35.421852   33043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:20:35.423268   33043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:20:35.424519   33043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:20:35.425736   33043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:20:35.427423   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:35.427536   33043 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:20:35.428064   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.428107   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.443112   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0930 11:20:35.443682   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.444204   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.444222   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.444550   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.444728   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.482622   33043 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:20:35.483910   33043 start.go:297] selected driver: kvm2
	I0930 11:20:35.483927   33043 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.484109   33043 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:20:35.484423   33043 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.484521   33043 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:20:35.500176   33043 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:20:35.500994   33043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:20:35.501027   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:20:35.501074   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:20:35.501131   33043 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.501263   33043 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.503184   33043 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:20:35.504511   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:20:35.504563   33043 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:20:35.504573   33043 cache.go:56] Caching tarball of preloaded images
	I0930 11:20:35.504731   33043 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:20:35.504748   33043 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:20:35.504904   33043 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:20:35.505134   33043 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:20:35.505183   33043 start.go:364] duration metric: took 27.274µs to acquireMachinesLock for "ha-033260"
	I0930 11:20:35.505203   33043 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:20:35.505236   33043 fix.go:54] fixHost starting: 
	I0930 11:20:35.505507   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.505539   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.520330   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0930 11:20:35.520763   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.521246   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.521267   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.521605   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.521835   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.521965   33043 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:20:35.523567   33043 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:20:35.523602   33043 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:20:35.525750   33043 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:20:35.527061   33043 machine.go:93] provisionDockerMachine start ...
	I0930 11:20:35.527088   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.527326   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.530036   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530579   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.530600   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530780   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.530958   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531111   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.531336   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.531561   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.531576   33043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:20:35.649365   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.649400   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649690   33043 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:20:35.649710   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649919   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.652623   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653056   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.653103   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653299   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.653488   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653688   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653834   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.653997   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.654241   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.654260   33043 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:20:35.785013   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.785047   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.788437   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.788960   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.788993   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.789200   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.789404   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789576   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789719   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.789879   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.790046   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.790061   33043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:20:35.902798   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:20:35.902835   33043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:20:35.902868   33043 buildroot.go:174] setting up certificates
	I0930 11:20:35.902885   33043 provision.go:84] configureAuth start
	I0930 11:20:35.902905   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.903213   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:20:35.905874   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906221   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.906243   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906402   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.908695   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909090   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.909113   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909309   33043 provision.go:143] copyHostCerts
	I0930 11:20:35.909340   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909394   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:20:35.909406   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909486   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:20:35.909601   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909636   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:20:35.909647   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909686   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:20:35.909766   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909790   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:20:35.909794   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909825   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:20:35.909903   33043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:20:35.980635   33043 provision.go:177] copyRemoteCerts
	I0930 11:20:35.980685   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:20:35.980706   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.983637   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.983980   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.983998   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.984309   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.984502   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.984684   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.984848   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:20:36.072953   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:20:36.073023   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:20:36.102423   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:20:36.102509   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:20:36.135815   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:20:36.135913   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:20:36.166508   33043 provision.go:87] duration metric: took 263.6024ms to configureAuth
	I0930 11:20:36.166535   33043 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:20:36.166819   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:36.166934   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:36.169482   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.169896   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:36.169922   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.170125   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:36.170342   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170514   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170642   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:36.170792   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:36.170996   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:36.171017   33043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:22:06.983121   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:22:06.983146   33043 machine.go:96] duration metric: took 1m31.456067098s to provisionDockerMachine
	I0930 11:22:06.983157   33043 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:22:06.983167   33043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:22:06.983186   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:06.983540   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:22:06.983587   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:06.986877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987470   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:06.987488   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987723   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:06.987912   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:06.988044   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:06.988157   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.073758   33043 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:22:07.078469   33043 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:22:07.078512   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:22:07.078605   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:22:07.078699   33043 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:22:07.078713   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:22:07.078804   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:22:07.089555   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:07.116194   33043 start.go:296] duration metric: took 133.023032ms for postStartSetup
	I0930 11:22:07.116254   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.116551   33043 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:22:07.116577   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.119461   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.119823   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.119858   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.120010   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.120203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.120359   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.120470   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	W0930 11:22:07.204626   33043 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 11:22:07.204654   33043 fix.go:56] duration metric: took 1m31.699418607s for fixHost
	I0930 11:22:07.204673   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.207768   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208205   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.208236   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208426   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.208670   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208815   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208920   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.209074   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:22:07.209303   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:22:07.209317   33043 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:22:07.318615   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695327.281322937
	
	I0930 11:22:07.318635   33043 fix.go:216] guest clock: 1727695327.281322937
	I0930 11:22:07.318652   33043 fix.go:229] Guest: 2024-09-30 11:22:07.281322937 +0000 UTC Remote: 2024-09-30 11:22:07.204660834 +0000 UTC m=+91.828672682 (delta=76.662103ms)
	I0930 11:22:07.318687   33043 fix.go:200] guest clock delta is within tolerance: 76.662103ms
	I0930 11:22:07.318695   33043 start.go:83] releasing machines lock for "ha-033260", held for 1m31.813499324s
	I0930 11:22:07.318717   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.318982   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:07.321877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322412   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.322444   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322594   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323100   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323285   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323407   33043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:22:07.323451   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.323488   33043 ssh_runner.go:195] Run: cat /version.json
	I0930 11:22:07.323513   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.326064   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326202   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326521   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326548   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326576   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326591   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326637   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326826   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.326854   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326968   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327118   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.327178   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.327254   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327385   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.415864   33043 ssh_runner.go:195] Run: systemctl --version
	I0930 11:22:07.451247   33043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:22:07.632639   33043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:22:07.641688   33043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:22:07.641764   33043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:22:07.651983   33043 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:22:07.652031   33043 start.go:495] detecting cgroup driver to use...
	I0930 11:22:07.652103   33043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:22:07.669168   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:22:07.684823   33043 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:22:07.684912   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:22:07.701483   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:22:07.716518   33043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:22:07.896967   33043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:22:08.050310   33043 docker.go:233] disabling docker service ...
	I0930 11:22:08.050371   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:22:08.068482   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:22:08.084459   33043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:22:08.236128   33043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:22:08.390802   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:22:08.406104   33043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:22:08.427375   33043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:22:08.427446   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.438743   33043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:22:08.438847   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.452067   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.463557   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.475079   33043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:22:08.487336   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.498829   33043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.511516   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.523240   33043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:22:08.533544   33043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:22:08.544108   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:08.698933   33043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:22:09.935253   33043 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.236281542s)
	I0930 11:22:09.935282   33043 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:22:09.935334   33043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:22:09.940570   33043 start.go:563] Will wait 60s for crictl version
	I0930 11:22:09.940624   33043 ssh_runner.go:195] Run: which crictl
	I0930 11:22:09.945362   33043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:22:09.989303   33043 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:22:09.989390   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.021074   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.054999   33043 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:22:10.056435   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:10.059297   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.059696   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:10.059727   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.060000   33043 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:22:10.065633   33043 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:22:10.065825   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:22:10.065888   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.114243   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.114265   33043 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:22:10.114317   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.150653   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.150674   33043 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:22:10.150709   33043 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:22:10.150850   33043 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:22:10.150941   33043 ssh_runner.go:195] Run: crio config
	I0930 11:22:10.206136   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:22:10.206155   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:22:10.206167   33043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:22:10.206190   33043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:22:10.206332   33043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:22:10.206353   33043 kube-vip.go:115] generating kube-vip config ...
	I0930 11:22:10.206392   33043 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:22:10.219053   33043 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:22:10.219173   33043 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:22:10.219254   33043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:22:10.229908   33043 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:22:10.230004   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:22:10.240121   33043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:22:10.258330   33043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:22:10.275729   33043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:22:10.294239   33043 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:22:10.312810   33043 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:22:10.318284   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:10.474551   33043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:22:10.491027   33043 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:22:10.491051   33043 certs.go:194] generating shared ca certs ...
	I0930 11:22:10.491069   33043 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.491243   33043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:22:10.491283   33043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:22:10.491302   33043 certs.go:256] generating profile certs ...
	I0930 11:22:10.491378   33043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:22:10.491404   33043 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:22:10.491428   33043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:22:10.563349   33043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 ...
	I0930 11:22:10.563384   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8: {Name:mkee749054ef5d747ecd6803933a55d7df9028fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563569   33043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 ...
	I0930 11:22:10.563581   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8: {Name:mk9e9a7e147c3768475898ec896a945ed1a2ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563657   33043 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:22:10.563846   33043 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:22:10.563993   33043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:22:10.564009   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:22:10.564024   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:22:10.564040   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:22:10.564063   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:22:10.564079   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:22:10.564094   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:22:10.564108   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:22:10.564123   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:22:10.564204   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:22:10.564237   33043 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:22:10.564251   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:22:10.564279   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:22:10.564308   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:22:10.564350   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:22:10.564409   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:10.564444   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.564467   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.564488   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.565081   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:22:10.592675   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:22:10.618318   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:22:10.644512   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:22:10.671272   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:22:10.697564   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:22:10.722738   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:22:10.749628   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:22:10.776815   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:22:10.803425   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:22:10.831267   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:22:10.857397   33043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:22:10.875093   33043 ssh_runner.go:195] Run: openssl version
	I0930 11:22:10.881398   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:22:10.892677   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897320   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897366   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.903164   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:22:10.912882   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:22:10.923908   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928941   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928987   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.935759   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:22:10.946855   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:22:10.958480   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963160   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963215   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.969693   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:22:10.979808   33043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:22:10.984752   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:22:10.990728   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:22:10.996688   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:22:11.002573   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:22:11.008376   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:22:11.014247   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
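	[editor's note] The block above shows two conventions worth knowing when reading these logs: minikube names the trust-store symlink after the OpenSSL subject hash (the "openssl x509 -hash -noout" output becomes /etc/ssl/certs/<hash>.0), and it validates each control-plane certificate with "-checkend 86400", which succeeds only if the certificate is still valid 24 hours from now. The following is a minimal, hypothetical Go sketch of those two steps; it is not minikube's certs.go, and only the paths and the 86400-second window are taken from the log.

	// verify_certs.go: illustrative sketch of the certificate steps logged above.
	// Assumes openssl is on PATH and the process may write to /etc/ssl/certs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHash returns the hash openssl uses to name /etc/ssl/certs/<hash>.0
	// (e.g. b5213941 for minikubeCA.pem in the log).
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// linkIntoTrustStore mirrors the "ln -fs <cert> /etc/ssl/certs/<hash>.0" runs.
	func linkIntoTrustStore(certPath string) error {
		hash, err := subjectHash(certPath)
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -f
		return os.Symlink(certPath, link)
	}

	// notExpiringWithin24h mirrors "openssl x509 -noout -in <cert> -checkend 86400":
	// the command exits 0 only if the certificate is still valid 86400s from now.
	func notExpiringWithin24h(certPath string) bool {
		return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
	}

	func main() {
		ca := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
		if err := linkIntoTrustStore(ca); err != nil {
			fmt.Fprintln(os.Stderr, "link failed:", err)
		}
		fmt.Println("valid for 24h:", notExpiringWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
	}
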
	I0930 11:22:11.020178   33043 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:22:11.020295   33043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:22:11.020338   33043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:22:11.063287   33043 cri.go:89] found id: "67ee9c49babe93d74d8ee81ea2f17248f722c6211abed7e9723015bda428c4e0"
	I0930 11:22:11.063310   33043 cri.go:89] found id: "e591b4f157ddf0eb6b48bdb31431c92024f32bbe7aa2f96293514fffeed045fe"
	I0930 11:22:11.063314   33043 cri.go:89] found id: "9dc9be1c78f6ce470cf1031b617b8d94b60138c3c1bd738c2bafa9f07db57573"
	I0930 11:22:11.063317   33043 cri.go:89] found id: "5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c"
	I0930 11:22:11.063320   33043 cri.go:89] found id: "856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0"
	I0930 11:22:11.063323   33043 cri.go:89] found id: "2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7"
	I0930 11:22:11.063325   33043 cri.go:89] found id: "347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c"
	I0930 11:22:11.063328   33043 cri.go:89] found id: "6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346"
	I0930 11:22:11.063330   33043 cri.go:89] found id: "7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee"
	I0930 11:22:11.063334   33043 cri.go:89] found id: "aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8"
	I0930 11:22:11.063341   33043 cri.go:89] found id: "e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489"
	I0930 11:22:11.063343   33043 cri.go:89] found id: "2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2"
	I0930 11:22:11.063346   33043 cri.go:89] found id: "cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac"
	I0930 11:22:11.063349   33043 cri.go:89] found id: ""
	I0930 11:22:11.063386   33043 ssh_runner.go:195] Run: sudo runc list -f json
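	[editor's note] The "found id:" lines above come from cri.go enumerating kube-system containers via the crictl command shown at 11:22:11.020338. Below is a small, hypothetical Go sketch that reproduces that query locally; minikube actually runs it over SSH on the node, and crictl plus sudo are assumed to be available.

	// list_kube_system.go: illustrative reproduction of the CRI listing step.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// --quiet prints one container ID per line, matching the "found id:" entries.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
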
	
	
	==> CRI-O <==
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.348613641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45159196-1471-4e63-b9f1-b6f3482f587f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.350871576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c0cab8e-d5f6-477c-b0ea-6df0893d515b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.351315759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695373351290396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c0cab8e-d5f6-477c-b0ea-6df0893d515b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.352283375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c08dfbf8-b93f-40d7-8bcd-4c2bb02437e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.352351990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c08dfbf8-b93f-40d7-8bcd-4c2bb02437e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.352882923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c08dfbf8-b93f-40d7-8bcd-4c2bb02437e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.405535573Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11a3d351-ba5a-41c7-9b3a-f4cac00b09d6 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.405811848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11a3d351-ba5a-41c7-9b3a-f4cac00b09d6 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.407702571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9061cd8-04f8-466e-b6ce-67b930e34ccb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.408177532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695373408154289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9061cd8-04f8-466e-b6ce-67b930e34ccb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.408834083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c48313a9-6c7e-4b43-947a-a961bab0f241 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.408914182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c48313a9-6c7e-4b43-947a-a961bab0f241 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.409281642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c48313a9-6c7e-4b43-947a-a961bab0f241 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.443292009Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c095696-3319-4e41-b2a5-1ec3299d59a4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.443576597Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-nbhwc,Uid:e62e1e44-3723-496c-85a3-7a79e9c8264b,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695370646220824,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:14:38.675928095Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-033260,Uid:89735afe181ab1f81ff05fc69dd5d08e,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1727695348774040104,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{kubernetes.io/config.hash: 89735afe181ab1f81ff05fc69dd5d08e,kubernetes.io/config.seen: 2024-09-30T11:22:10.276794807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5frmm,Uid:7333717d-95d5-4990-bac9-8443a51eee97,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695337095202863,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09
-30T11:12:18.075315913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-033260,Uid:6c1732ebd63e52d0c6ac6d9cd648cff5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336958964052,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.249:8443,kubernetes.io/config.hash: 6c1732ebd63e52d0c6ac6d9cd648cff5,kubernetes.io/config.seen: 2024-09-30T11:11:59.436984477Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&PodSandboxMetadata{Name:kindnet-g94k6,Uid:260e385d-9e17-4
af8-a854-8683afb714c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336941413983,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:04.361135889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kt87v,Uid:26f75c31-d44d-4a4c-8048-b6ce5c824151,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336938360844,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151
,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:18.066691994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&PodSandboxMetadata{Name:etcd-ha-033260,Uid:4ee6f0cb154890b5d1bf6173256957d4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336914010852,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.249:2379,kubernetes.io/config.hash: 4ee6f0cb154890b5d1bf6173256957d4,kubernetes.io/config.seen: 2024-09-30T11:11:59.436980848Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa4
4c8f158e4b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-033260,Uid:734999721cb3f48c24354599fcaf3db2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336899147915,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 734999721cb3f48c24354599fcaf3db2,kubernetes.io/config.seen: 2024-09-30T11:11:59.436987205Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:964381ab-f2ac-4361-a7e0-5212fff5e26e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336898021330,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T11:12:18.074715472Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&PodSandboxMetadata{Name:kube-proxy-mxvxr,Uid:314da0b5-6242-4af0-8e99-d0aaba82a43e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336895455730,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:12:04.378056083Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-033260,Uid:43955f8cf95999657a88952585c93768,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727695336883785768,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.
container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 43955f8cf95999657a88952585c93768,kubernetes.io/config.seen: 2024-09-30T11:11:59.436985887Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9c095696-3319-4e41-b2a5-1ec3299d59a4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.444341102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6962131c-de4f-4bc7-9d6c-9d7bfafea7f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.444457704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6962131c-de4f-4bc7-9d6c-9d7bfafea7f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.444697546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPo
rt\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204
d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6962131c-de4f-4bc7-9d6c-9d7bfafea7f4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.453134957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9398d504-1fb0-4ae9-a03c-3f076ff6abfb name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.453221018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9398d504-1fb0-4ae9-a03c-3f076ff6abfb name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.454517828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=347af43a-4948-4601-b9a1-f962a6fdf6be name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.454994002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695373454971735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=347af43a-4948-4601-b9a1-f962a6fdf6be name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.455767208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41d8699a-58a4-414b-b46c-f14a39a1b1ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.455850057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41d8699a-58a4-414b-b46c-f14a39a1b1ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:53 ha-033260 crio[3649]: time="2024-09-30 11:22:53.456221826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41d8699a-58a4-414b-b46c-f14a39a1b1ca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	710727c61a470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 seconds ago       Running             busybox                   1                   b6882d75e9725       busybox-7dff88458-nbhwc
	553822716c2e0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      24 seconds ago      Running             kube-vip                  0                   1a455b42d02b7       kube-vip-ha-033260
	cb85b01ef51db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      35 seconds ago      Exited              storage-provisioner       3                   c51802eecf1d4       storage-provisioner
	8702bda4a75f9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      35 seconds ago      Running             coredns                   1                   51f8166139d91       coredns-7c65d6cfc9-5frmm
	d71ac252dcd80       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      35 seconds ago      Running             kindnet-cni               1                   efa9ebbfa94e8       kindnet-g94k6
	5a83e9ce6a32a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      35 seconds ago      Exited              kube-apiserver            2                   72d470d360a7c       kube-apiserver-ha-033260
	bc5f830d46b01       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      35 seconds ago      Running             coredns                   1                   d42975c48cf04       coredns-7c65d6cfc9-kt87v
	0a2a1de86feca       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      36 seconds ago      Running             kube-proxy                1                   0d34e22a6894a       kube-proxy-mxvxr
	9ce2338b980fa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      36 seconds ago      Exited              kube-controller-manager   1                   9591ef5a18733       kube-controller-manager-ha-033260
	7ddf3925913de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      36 seconds ago      Running             kube-scheduler            1                   36d814d7ad35f       kube-scheduler-ha-033260
	6209266b6bd43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      36 seconds ago      Running             etcd                      1                   703c1a1dd3cad       etcd-ha-033260
	5d1585ef6941b       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Exited              kube-vip                  1                   2bd722c6afa63       kube-vip-ha-033260
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   b2990036962da       kindnet-g94k6
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   f789f882a4d3c       etcd-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1722&timeout=6m53s&timeoutSeconds=413&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[461234772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.420) (total time: 10001ms):
	Trace[461234772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.421)
	Trace[461234772]: [10.001239871s] [10.001239871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1495932049]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.566) (total time: 10001ms):
	Trace[1495932049]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.568)
	Trace[1495932049]: [10.00173189s] [10.00173189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1696823593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:23.560) (total time: 10001ms):
	Trace[1696823593]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:33.562)
	Trace[1696823593]: [10.001227226s] [10.001227226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1783554872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:26.582) (total time: 10001ms):
	Trace[1783554872]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:36.583)
	Trace[1783554872]: [10.001386462s] [10.001386462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	[Sep30 11:19] kauditd_printk_skb: 1 callbacks suppressed
	[Sep30 11:22] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.171159] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +0.178992] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +0.163482] systemd-fstab-generator[3610]: Ignoring "noauto" option for root device
	[  +0.302477] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +1.773800] systemd-fstab-generator[3737]: Ignoring "noauto" option for root device
	[  +6.561875] kauditd_printk_skb: 122 callbacks suppressed
	[ +11.942924] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.334690] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699] <==
	{"level":"warn","ts":"2024-09-30T11:22:50.381263Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-30T11:22:50.625433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:50.625504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:50.881859Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:51.382006Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:51.882199Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-30T11:22:52.325729Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:52.382974Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:52.883272Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:53.375924Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-30T11:22:53.376018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001185827s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-30T11:22:53.376042Z","caller":"traceutil/trace.go:171","msg":"trace[951823920] range","detail":"{range_begin:; range_end:; }","duration":"7.001240597s","start":"2024-09-30T11:22:46.374792Z","end":"2024-09-30T11:22:53.376033Z","steps":["trace[951823920] 'agreement among raft nodes before linearized reading'  (duration: 7.001183883s)"],"step_count":1}
	{"level":"error","ts":"2024-09-30T11:22:53.376080Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"warn","ts":"2024-09-30T11:22:53.396827Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-033260 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-30T11:22:53.406496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:53.413780Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:53.418079Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-30T11:22:53.419227Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: no route to host"}
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-30T11:20:36.370065Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:20:36.370167Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T11:20:36.370398Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-30T11:20:36.370773Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370907Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371265Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371274Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371355Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371462Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371746Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371825Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371896Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371926Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.375304Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"warn","ts":"2024-09-30T11:20:36.375330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.79480238s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-09-30T11:20:36.375483Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-033260","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"traceutil/trace.go:171","msg":"trace[311727536] range","detail":"{range_begin:; range_end:; }","duration":"8.794947939s","start":"2024-09-30T11:20:27.580499Z","end":"2024-09-30T11:20:36.375447Z","steps":["trace[311727536] 'agreement among raft nodes before linearized reading'  (duration: 8.794800052s)"],"step_count":1}
	
	
	==> kernel <==
	 11:22:53 up 11 min,  0 users,  load average: 0.38, 0.44, 0.24
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:20:07.854375       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:07.854476       1 main.go:299] handling current node
	I0930 11:20:07.854505       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:07.854522       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:07.854768       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:07.854802       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:07.854864       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:07.854882       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862189       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:17.862294       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862450       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:17.862475       1 main.go:299] handling current node
	I0930 11:20:17.862496       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:17.862512       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:17.862579       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:17.862598       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.860296       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:27.860570       1 main.go:299] handling current node
	I0930 11:20:27.860675       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:27.860705       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:27.860964       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:27.860987       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.861050       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:27.861068       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	E0930 11:20:36.321163       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22] <==
	I0930 11:22:18.408919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0930 11:22:18.411055       1 main.go:139] hostIP = 192.168.39.249
	podIP = 192.168.39.249
	I0930 11:22:18.411374       1 main.go:148] setting mtu 1500 for CNI 
	I0930 11:22:18.412734       1 main.go:178] kindnetd IP family: "ipv4"
	I0930 11:22:18.412859       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0930 11:22:19.102768       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	W0930 11:22:29.112365       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0930 11:22:29.113034       1 trace.go:236] Trace[1532943992]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (30-Sep-2024 11:22:19.102) (total time: 10009ms):
	Trace[1532943992]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10009ms (11:22:29.112)
	Trace[1532943992]: [10.009569146s] [10.009569146s] END
	E0930 11:22:29.113093       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0930 11:22:39.966399       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	E0930 11:22:39.966874       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	W0930 11:22:42.909485       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	E0930 11:22:42.909698       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	W0930 11:22:52.125278       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	E0930 11:22:52.125345       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3] <==
	I0930 11:22:18.351040       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:22:18.358932       1 server.go:142] Version: v1.31.1
	I0930 11:22:18.358991       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:18.951862       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:22:18.952819       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:22:18.958733       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:22:18.958775       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:22:18.959115       1 instance.go:232] Using reconciler: lease
	W0930 11:22:38.930134       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0930 11:22:38.930367       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:22:38.959894       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0] <==
	I0930 11:22:18.773138       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:22:19.094923       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:22:19.094962       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:19.097540       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:22:19.097832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:22:19.097963       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:22:19.098124       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:22:39.965712       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030] <==
	I0930 11:22:19.122779       1 server_linux.go:66] "Using iptables proxy"
	E0930 11:22:19.157903       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.183052       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.549933       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:22.621776       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:25.694800       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:31.839062       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:44.126368       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
	E0930 11:19:18.749153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:18.749098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:18.749189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:21.949265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:21.949358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:28.093023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:28.093160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:33.661953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:33.662031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.734817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.734886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.735220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.735300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:52.094043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:52.094172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:32.031340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:32.032079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:20:20.605503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0930 11:20:22.554431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:23.502921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0930 11:20:23.897499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0930 11:20:25.058394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0930 11:20:25.967971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.690085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.719785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0930 11:20:28.017485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0930 11:20:28.510153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0930 11:20:28.970490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:30.680510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0930 11:20:30.842087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0930 11:20:32.435288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0930 11:20:34.062903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0930 11:20:36.288434       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801] <==
	E0930 11:22:47.146722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.351451       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.351518       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.249:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.380395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.380458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.249:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:47.462180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:47.462250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.133417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.133538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.249:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.385125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.385238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.249:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.492946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.493080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.797564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.797767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.836125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.836250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.876443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.876506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.938087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.938223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:49.872885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:49.873029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:50.083149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:50.083283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 30 11:22:44 ha-033260 kubelet[1307]: W0930 11:22:44.125186    1307 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1765": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125716    1307 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1765\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125576    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: I0930 11:22:44.125406    1307 status_manager.go:851] "Failed to get status for pod" podUID="4ee6f0cb154890b5d1bf6173256957d4" pod="kube-system/etcd-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125276    1307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.247863    1307 scope.go:117] "RemoveContainer" containerID="9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.248393    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-033260_kube-system(43955f8cf95999657a88952585c93768)\"" pod="kube-system/kube-controller-manager-ha-033260" podUID="43955f8cf95999657a88952585c93768"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.864719    1307 scope.go:117] "RemoveContainer" containerID="5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.864888    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-033260_kube-system(6c1732ebd63e52d0c6ac6d9cd648cff5)\"" pod="kube-system/kube-apiserver-ha-033260" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197121    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197334    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197299    1307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-033260.17fa018ec87563f8  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-033260,UID:6c1732ebd63e52d0c6ac6d9cd648cff5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-033260,},FirstTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,LastTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:
nil,ReportingController:kubelet,ReportingInstance:ha-033260,}"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: I0930 11:22:47.197407    1307 status_manager.go:851] "Failed to get status for pod" podUID="734999721cb3f48c24354599fcaf3db2" pod="kube-system/kube-scheduler-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197256    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197144    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197942    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688509    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688533    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269074    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269229    1307 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: I0930 11:22:50.269083    1307 status_manager.go:851] "Failed to get status for pod" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5" pod="kube-system/kube-apiserver-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:52 ha-033260 kubelet[1307]: I0930 11:22:52.442046    1307 scope.go:117] "RemoveContainer" containerID="cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda"
	Sep 30 11:22:52 ha-033260 kubelet[1307]: E0930 11:22:52.442202    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(964381ab-f2ac-4361-a7e0-5212fff5e26e)\"" pod="kube-system/storage-provisioner" podUID="964381ab-f2ac-4361-a7e0-5212fff5e26e"
	Sep 30 11:22:53 ha-033260 kubelet[1307]: E0930 11:22:53.341056    1307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 30 11:22:53 ha-033260 kubelet[1307]: I0930 11:22:53.341059    1307 status_manager.go:851] "Failed to get status for pod" podUID="260e385d-9e17-4af8-a854-8683afb714c4" pod="kube-system/kindnet-g94k6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-g94k6\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

-- /stdout --
** stderr ** 
	E0930 11:22:53.027083   33911 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260: exit status 2 (234.893517ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-033260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.48s)

x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.28s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-033260" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-033260\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-033260\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-033260\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.104\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":fa
lse,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260: exit status 2 (217.451914ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.355838874s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:20:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:20:35.412602   33043 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:20:35.412849   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.412858   33043 out.go:358] Setting ErrFile to fd 2...
	I0930 11:20:35.412863   33043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:20:35.413024   33043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:20:35.413552   33043 out.go:352] Setting JSON to false
	I0930 11:20:35.414491   33043 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3782,"bootTime":1727691453,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:20:35.414596   33043 start.go:139] virtualization: kvm guest
	I0930 11:20:35.416608   33043 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:20:35.417763   33043 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:20:35.417777   33043 notify.go:220] Checking for updates...
	I0930 11:20:35.420438   33043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:20:35.421852   33043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:20:35.423268   33043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:20:35.424519   33043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:20:35.425736   33043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:20:35.427423   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:35.427536   33043 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:20:35.428064   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.428107   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.443112   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0930 11:20:35.443682   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.444204   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.444222   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.444550   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.444728   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.482622   33043 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:20:35.483910   33043 start.go:297] selected driver: kvm2
	I0930 11:20:35.483927   33043 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.484109   33043 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:20:35.484423   33043 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.484521   33043 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:20:35.500176   33043 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:20:35.500994   33043 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:20:35.501027   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:20:35.501074   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:20:35.501131   33043 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:20:35.501263   33043 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:20:35.503184   33043 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:20:35.504511   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:20:35.504563   33043 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:20:35.504573   33043 cache.go:56] Caching tarball of preloaded images
	I0930 11:20:35.504731   33043 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:20:35.504748   33043 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:20:35.504904   33043 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:20:35.505134   33043 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:20:35.505183   33043 start.go:364] duration metric: took 27.274µs to acquireMachinesLock for "ha-033260"
	I0930 11:20:35.505203   33043 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:20:35.505236   33043 fix.go:54] fixHost starting: 
	I0930 11:20:35.505507   33043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:20:35.505539   33043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:20:35.520330   33043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I0930 11:20:35.520763   33043 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:20:35.521246   33043 main.go:141] libmachine: Using API Version  1
	I0930 11:20:35.521267   33043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:20:35.521605   33043 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:20:35.521835   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.521965   33043 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:20:35.523567   33043 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:20:35.523602   33043 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:20:35.525750   33043 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:20:35.527061   33043 machine.go:93] provisionDockerMachine start ...
	I0930 11:20:35.527088   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:20:35.527326   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.530036   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530579   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.530600   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.530780   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.530958   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531111   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.531203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.531336   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.531561   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.531576   33043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:20:35.649365   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.649400   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649690   33043 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:20:35.649710   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.649919   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.652623   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653056   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.653103   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.653299   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.653488   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653688   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.653834   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.653997   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.654241   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.654260   33043 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:20:35.785013   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:20:35.785047   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.788437   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.788960   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.788993   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.789200   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.789404   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789576   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.789719   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.789879   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:35.790046   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:35.790061   33043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:20:35.902798   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:20:35.902835   33043 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:20:35.902868   33043 buildroot.go:174] setting up certificates
	I0930 11:20:35.902885   33043 provision.go:84] configureAuth start
	I0930 11:20:35.902905   33043 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:20:35.903213   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:20:35.905874   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906221   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.906243   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.906402   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.908695   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909090   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.909113   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.909309   33043 provision.go:143] copyHostCerts
	I0930 11:20:35.909340   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909394   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:20:35.909406   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:20:35.909486   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:20:35.909601   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909636   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:20:35.909647   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:20:35.909686   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:20:35.909766   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909790   33043 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:20:35.909794   33043 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:20:35.909825   33043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:20:35.909903   33043 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:20:35.980635   33043 provision.go:177] copyRemoteCerts
	I0930 11:20:35.980685   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:20:35.980706   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:35.983637   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.983980   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:35.983998   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:35.984309   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:35.984502   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:35.984684   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:35.984848   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:20:36.072953   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:20:36.073023   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:20:36.102423   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:20:36.102509   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:20:36.135815   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:20:36.135913   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:20:36.166508   33043 provision.go:87] duration metric: took 263.6024ms to configureAuth
	I0930 11:20:36.166535   33043 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:20:36.166819   33043 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:20:36.166934   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:20:36.169482   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.169896   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:20:36.169922   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:20:36.170125   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:20:36.170342   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170514   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:20:36.170642   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:20:36.170792   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:20:36.170996   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:20:36.171017   33043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:22:06.983121   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:22:06.983146   33043 machine.go:96] duration metric: took 1m31.456067098s to provisionDockerMachine
	I0930 11:22:06.983157   33043 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:22:06.983167   33043 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:22:06.983186   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:06.983540   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:22:06.983587   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:06.986877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987470   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:06.987488   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:06.987723   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:06.987912   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:06.988044   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:06.988157   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.073758   33043 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:22:07.078469   33043 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:22:07.078512   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:22:07.078605   33043 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:22:07.078699   33043 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:22:07.078713   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:22:07.078804   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:22:07.089555   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:07.116194   33043 start.go:296] duration metric: took 133.023032ms for postStartSetup
	I0930 11:22:07.116254   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.116551   33043 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:22:07.116577   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.119461   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.119823   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.119858   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.120010   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.120203   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.120359   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.120470   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	W0930 11:22:07.204626   33043 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0930 11:22:07.204654   33043 fix.go:56] duration metric: took 1m31.699418607s for fixHost
	I0930 11:22:07.204673   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.207768   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208205   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.208236   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.208426   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.208670   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208815   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.208920   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.209074   33043 main.go:141] libmachine: Using SSH client type: native
	I0930 11:22:07.209303   33043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:22:07.209317   33043 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:22:07.318615   33043 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695327.281322937
	
	I0930 11:22:07.318635   33043 fix.go:216] guest clock: 1727695327.281322937
	I0930 11:22:07.318652   33043 fix.go:229] Guest: 2024-09-30 11:22:07.281322937 +0000 UTC Remote: 2024-09-30 11:22:07.204660834 +0000 UTC m=+91.828672682 (delta=76.662103ms)
	I0930 11:22:07.318687   33043 fix.go:200] guest clock delta is within tolerance: 76.662103ms
	I0930 11:22:07.318695   33043 start.go:83] releasing machines lock for "ha-033260", held for 1m31.813499324s
	I0930 11:22:07.318717   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.318982   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:07.321877   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322412   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.322444   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.322594   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323100   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323285   33043 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:22:07.323407   33043 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:22:07.323451   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.323488   33043 ssh_runner.go:195] Run: cat /version.json
	I0930 11:22:07.323513   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:22:07.326064   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326202   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326521   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326548   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326576   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:07.326591   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:07.326637   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326826   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.326854   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:22:07.326968   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327118   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:22:07.327178   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.327254   33043 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:22:07.327385   33043 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:22:07.415864   33043 ssh_runner.go:195] Run: systemctl --version
	I0930 11:22:07.451247   33043 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:22:07.632639   33043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:22:07.641688   33043 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:22:07.641764   33043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:22:07.651983   33043 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:22:07.652031   33043 start.go:495] detecting cgroup driver to use...
	I0930 11:22:07.652103   33043 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:22:07.669168   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:22:07.684823   33043 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:22:07.684912   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:22:07.701483   33043 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:22:07.716518   33043 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:22:07.896967   33043 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:22:08.050310   33043 docker.go:233] disabling docker service ...
	I0930 11:22:08.050371   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:22:08.068482   33043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:22:08.084459   33043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:22:08.236128   33043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:22:08.390802   33043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:22:08.406104   33043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:22:08.427375   33043 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:22:08.427446   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.438743   33043 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:22:08.438847   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.452067   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.463557   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.475079   33043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:22:08.487336   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.498829   33043 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.511516   33043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:22:08.523240   33043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:22:08.533544   33043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:22:08.544108   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:08.698933   33043 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:22:09.935253   33043 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.236281542s)
	I0930 11:22:09.935282   33043 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:22:09.935334   33043 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:22:09.940570   33043 start.go:563] Will wait 60s for crictl version
	I0930 11:22:09.940624   33043 ssh_runner.go:195] Run: which crictl
	I0930 11:22:09.945362   33043 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:22:09.989303   33043 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:22:09.989390   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.021074   33043 ssh_runner.go:195] Run: crio --version
	I0930 11:22:10.054999   33043 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:22:10.056435   33043 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:22:10.059297   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.059696   33043 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:22:10.059727   33043 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:22:10.060000   33043 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:22:10.065633   33043 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:22:10.065825   33043 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:22:10.065888   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.114243   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.114265   33043 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:22:10.114317   33043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:22:10.150653   33043 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:22:10.150674   33043 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:22:10.150709   33043 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:22:10.150850   33043 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:22:10.150941   33043 ssh_runner.go:195] Run: crio config
	I0930 11:22:10.206136   33043 cni.go:84] Creating CNI manager for ""
	I0930 11:22:10.206155   33043 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:22:10.206167   33043 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:22:10.206190   33043 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:22:10.206332   33043 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:22:10.206353   33043 kube-vip.go:115] generating kube-vip config ...
	I0930 11:22:10.206392   33043 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:22:10.219053   33043 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:22:10.219173   33043 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
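The kube-vip config above is later written to the guest as a static pod manifest (the scp of /etc/kubernetes/manifests/kube-vip.yaml a few lines below). As a quick sanity check of such a manifest, one could unmarshal it into a corev1.Pod and read back the VIP settings; this is a minimal sketch, assuming the sigs.k8s.io/yaml and k8s.io/api modules are in go.mod and the manifest has been copied locally to kube-vip.yaml (a hypothetical path).

	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)
	
	func main() {
		// kube-vip.yaml is a hypothetical local copy of the generated manifest.
		data, err := os.ReadFile("kube-vip.yaml")
		if err != nil {
			log.Fatal(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			log.Fatal(err)
		}
		// Print the VIP address and load-balancer settings carried in the env vars.
		for _, env := range pod.Spec.Containers[0].Env {
			switch env.Name {
			case "address", "lb_enable", "lb_port":
				fmt.Printf("%s=%s\n", env.Name, env.Value)
			}
		}
	}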
	I0930 11:22:10.219254   33043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:22:10.229908   33043 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:22:10.230004   33043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:22:10.240121   33043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:22:10.258330   33043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:22:10.275729   33043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:22:10.294239   33043 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:22:10.312810   33043 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:22:10.318284   33043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:22:10.474551   33043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:22:10.491027   33043 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:22:10.491051   33043 certs.go:194] generating shared ca certs ...
	I0930 11:22:10.491069   33043 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.491243   33043 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:22:10.491283   33043 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:22:10.491302   33043 certs.go:256] generating profile certs ...
	I0930 11:22:10.491378   33043 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:22:10.491404   33043 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:22:10.491428   33043 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:22:10.563349   33043 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 ...
	I0930 11:22:10.563384   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8: {Name:mkee749054ef5d747ecd6803933a55d7df9028fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563569   33043 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 ...
	I0930 11:22:10.563581   33043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8: {Name:mk9e9a7e147c3768475898ec896a945ed1a2ca5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:22:10.563657   33043 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:22:10.563846   33043 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:22:10.563993   33043 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:22:10.564009   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:22:10.564024   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:22:10.564040   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:22:10.564063   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:22:10.564079   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:22:10.564094   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:22:10.564108   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:22:10.564123   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:22:10.564204   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:22:10.564237   33043 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:22:10.564251   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:22:10.564279   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:22:10.564308   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:22:10.564350   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:22:10.564409   33043 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:22:10.564444   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.564467   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.564488   33043 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.565081   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:22:10.592675   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:22:10.618318   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:22:10.644512   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:22:10.671272   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:22:10.697564   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:22:10.722738   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:22:10.749628   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:22:10.776815   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:22:10.803425   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:22:10.831267   33043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:22:10.857397   33043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:22:10.875093   33043 ssh_runner.go:195] Run: openssl version
	I0930 11:22:10.881398   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:22:10.892677   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897320   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.897366   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:22:10.903164   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:22:10.912882   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:22:10.923908   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928941   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.928987   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:22:10.935759   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:22:10.946855   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:22:10.958480   33043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963160   33043 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.963215   33043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:22:10.969693   33043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:22:10.979808   33043 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:22:10.984752   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:22:10.990728   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:22:10.996688   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:22:11.002573   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:22:11.008376   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:22:11.014247   33043 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
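Each of the `openssl x509 -checkend 86400` invocations above asks whether the certificate will still be valid 24 hours from now. An equivalent check written against the Go standard library, reading a PEM-encoded certificate from a hypothetical local path, could look like the following sketch.

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		// apiserver.crt is a hypothetical local copy of one of the certs checked above.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Mirror `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h, expires:", cert.NotAfter)
	}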
	I0930 11:22:11.020178   33043 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:22:11.020295   33043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:22:11.020338   33043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:22:11.063287   33043 cri.go:89] found id: "67ee9c49babe93d74d8ee81ea2f17248f722c6211abed7e9723015bda428c4e0"
	I0930 11:22:11.063310   33043 cri.go:89] found id: "e591b4f157ddf0eb6b48bdb31431c92024f32bbe7aa2f96293514fffeed045fe"
	I0930 11:22:11.063314   33043 cri.go:89] found id: "9dc9be1c78f6ce470cf1031b617b8d94b60138c3c1bd738c2bafa9f07db57573"
	I0930 11:22:11.063317   33043 cri.go:89] found id: "5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c"
	I0930 11:22:11.063320   33043 cri.go:89] found id: "856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0"
	I0930 11:22:11.063323   33043 cri.go:89] found id: "2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7"
	I0930 11:22:11.063325   33043 cri.go:89] found id: "347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c"
	I0930 11:22:11.063328   33043 cri.go:89] found id: "6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346"
	I0930 11:22:11.063330   33043 cri.go:89] found id: "7a9e01197e5c6b93ad407fd55a87997ad971da8f01b419f28afe151cb9b7dfee"
	I0930 11:22:11.063334   33043 cri.go:89] found id: "aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8"
	I0930 11:22:11.063341   33043 cri.go:89] found id: "e62c0a6cc031f8bf59e56d14e1e873c2b041e7d8e0b9bc574a57f461d7b2f489"
	I0930 11:22:11.063343   33043 cri.go:89] found id: "2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2"
	I0930 11:22:11.063346   33043 cri.go:89] found id: "cd2027f0a04e15268a463edfaacc1090c2975c49f972af0729d97b0d16cf23ac"
	I0930 11:22:11.063349   33043 cri.go:89] found id: ""
	I0930 11:22:11.063386   33043 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.634080675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695375634051381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e88307a3-1c13-45ad-9dfa-9e8522832bb5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.634566698Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd79cc45-fc07-4eb8-b2f1-067cf98b6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.634704352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd79cc45-fc07-4eb8-b2f1-067cf98b6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.635117663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd79cc45-fc07-4eb8-b2f1-067cf98b6260 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.679141035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0257212-4d76-44f9-9fa8-482dc2670036 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.679236667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0257212-4d76-44f9-9fa8-482dc2670036 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.680528732Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=669ce173-edc5-46cb-a9df-fb94704d6268 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.681226810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695375681201823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=669ce173-edc5-46cb-a9df-fb94704d6268 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.681728404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc7cbb72-d4d6-4793-b32c-4bc8b797e123 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.681804509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc7cbb72-d4d6-4793-b32c-4bc8b797e123 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.682178837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc7cbb72-d4d6-4793-b32c-4bc8b797e123 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.724293887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=911ec39b-3da6-40fe-8ca2-aca65f65e383 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.724440093Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=911ec39b-3da6-40fe-8ca2-aca65f65e383 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.727175637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64ee2991-0dd3-40c7-9abb-ea9401eb606e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.727690722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695375727599388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64ee2991-0dd3-40c7-9abb-ea9401eb606e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.728507785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07dfbc1d-88d1-47f7-95ad-cb39f4ab1f52 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.728590576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07dfbc1d-88d1-47f7-95ad-cb39f4ab1f52 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.729022133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07dfbc1d-88d1-47f7-95ad-cb39f4ab1f52 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.770397115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a173c0cf-608d-4e3e-8637-38ed7cec09a8 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.770489505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a173c0cf-608d-4e3e-8637-38ed7cec09a8 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.771356988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27ec52d7-9b3e-4172-a8dc-05492aae35a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.771865321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695375771821019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27ec52d7-9b3e-4172-a8dc-05492aae35a7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.772663938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=447feec5-db34-4cc1-b68a-a9ba2b0bb815 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.772719709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=447feec5-db34-4cc1-b68a-a9ba2b0bb815 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:22:55 ha-033260 crio[3649]: time="2024-09-30 11:22:55.773089726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:710727c61a4701ae19d963b3218970199978ca4c0dc05565b1a13f12a56187f1,PodSandboxId:b6882d75e972529516b72b72c3413e6673f456ea322c262baac658bab15b6346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695370808210292,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553822716c2e014ddc358f51097c5ac39af94834f5955554be15d32aab732350,PodSandboxId:1a455b42d02b7e4bf0d4b5469f7896e45738f48454d92368ffc48e8112559da5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695348894333619,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda,PodSandboxId:c51802eecf1d476d9b69b697cce2325b67bfe0632b2a17643381855d319350de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695337750212205,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22,PodSandboxId:efa9ebbfa94e85c63b01edb40e5fd0623c2a8d0a76c9f96ab9068de0833ea3c4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695337695370740,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e,PodSandboxId:51f8166139d91ec0a3f86cf20916665afffa2f4fb9728231bf52e4cf31a99437,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337745515164,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3,PodSandboxId:72d470d360a7c73fa0ee76ae6299a07f0a865593b74d9c3d1b956dc87561d5f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695337667676932,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io
.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac,PodSandboxId:d42975c48cf046f7e3683851d274d1557b2e8dd51368dce108a89808ba365c66,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695337613353446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{
\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030,PodSandboxId:0d34e22a6894a9ec73599a566768cb0bc518a60afd21dd403e95adcb70259721,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695337432427260,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699,PodSandboxId:703c1a1dd3cad8a24a7b2fa4f2fe41fe44970038157a07c36fb96dca55479d5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695337335705971,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0,PodSandboxId:9591ef5a18733c1d3a6063fc7158a81a8008448a3c23ec8799388ffc87b9fa41,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695337394440337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801,PodSandboxId:36d814d7ad35f90f2c2466dc73b448830822280709924e38e59aa44c8f158e4b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695337384610497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb
3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1585ef6941bab813009eb651b2f69391d2361f1a6f2338804fe516c33e151c,PodSandboxId:2bd722c6afa6367f145f4ebdb6423ebed6288d2086382e9274c56212f86c24b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1727695126198946074,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc91a2a25badfe2ca88893e1f6ac643a,},Annotations:map[
string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:970aed3b1f96bb97e72898cd783225ac0006aebe6f054dd28143f597e8ea86a0,PodSandboxId:e5a4e140afd6ad949b07014c6b859f1ea789a97d9dade260abb5ed29f0fe7b50,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727694880474461209,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0,PodSandboxId:ee2a6eb69b10a669fc5cb46da152a0f46e479a701be48d5b4fc93794c488bf00,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738700577527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7,PodSandboxId:724d02dce7a0d8d3302d91b914313f6170fad7bb463f35a89efbc2d45ea83743,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727694738606990829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kuber
netes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346,PodSandboxId:b2990036962da9c3b67fe8a8e6276369a8c485498aa25e0a3f2b2162e2dbc3a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727694726648221340,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c,PodSandboxId:b08b772dab41d0de4aecc929afc68796f20ba155acf12d5995ad1312e342f984,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f
07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727694726649938090,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8,PodSandboxId:f789f882a4d3cf2a06f794fdbbf602769321846425b0200641d34768019ad655,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_EXITED,CreatedAt:1727694713376781779,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2,PodSandboxId:fd27dbf29ee9b8b28ca29c668a35a6290fc7fe7c3941a9c701561b0d3d113ef8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:17276947132
98082985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=447feec5-db34-4cc1-b68a-a9ba2b0bb815 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	710727c61a470       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 seconds ago       Running             busybox                   1                   b6882d75e9725       busybox-7dff88458-nbhwc
	553822716c2e0       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      26 seconds ago      Running             kube-vip                  0                   1a455b42d02b7       kube-vip-ha-033260
	cb85b01ef51db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      38 seconds ago      Exited              storage-provisioner       3                   c51802eecf1d4       storage-provisioner
	8702bda4a75f9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      38 seconds ago      Running             coredns                   1                   51f8166139d91       coredns-7c65d6cfc9-5frmm
	d71ac252dcd80       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      38 seconds ago      Running             kindnet-cni               1                   efa9ebbfa94e8       kindnet-g94k6
	5a83e9ce6a32a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      38 seconds ago      Exited              kube-apiserver            2                   72d470d360a7c       kube-apiserver-ha-033260
	bc5f830d46b01       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      38 seconds ago      Running             coredns                   1                   d42975c48cf04       coredns-7c65d6cfc9-kt87v
	0a2a1de86feca       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      38 seconds ago      Running             kube-proxy                1                   0d34e22a6894a       kube-proxy-mxvxr
	9ce2338b980fa       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      38 seconds ago      Exited              kube-controller-manager   1                   9591ef5a18733       kube-controller-manager-ha-033260
	7ddf3925913de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      38 seconds ago      Running             kube-scheduler            1                   36d814d7ad35f       kube-scheduler-ha-033260
	6209266b6bd43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      38 seconds ago      Running             etcd                      1                   703c1a1dd3cad       etcd-ha-033260
	5d1585ef6941b       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Exited              kube-vip                  1                   2bd722c6afa63       kube-vip-ha-033260
	970aed3b1f96b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   e5a4e140afd6a       busybox-7dff88458-nbhwc
	856f46390ed07       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   ee2a6eb69b10a       coredns-7c65d6cfc9-kt87v
	2aac013f37bf9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   724d02dce7a0d       coredns-7c65d6cfc9-5frmm
	347597ebf9b20       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   b08b772dab41d       kube-proxy-mxvxr
	6cf899810e161       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   b2990036962da       kindnet-g94k6
	aa8ecc81d0af2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   f789f882a4d3c       etcd-ha-033260
	2435a21a0f6f8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      11 minutes ago      Exited              kube-scheduler            0                   fd27dbf29ee9b       kube-scheduler-ha-033260
	
	
	==> coredns [2aac013f37bf973feaaafcbd51d773e3d3ff3a344806e961369cd9a2982d61b7] <==
	[INFO] 10.244.1.2:52635 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028646205s
	[INFO] 10.244.1.2:41853 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000176274s
	[INFO] 10.244.1.2:35962 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170835s
	[INFO] 10.244.0.4:41550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130972s
	[INFO] 10.244.0.4:32938 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173381s
	[INFO] 10.244.0.4:56409 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073902s
	[INFO] 10.244.2.2:58163 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000268677s
	[INFO] 10.244.2.2:36365 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010796s
	[INFO] 10.244.2.2:56656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115088s
	[INFO] 10.244.2.2:56306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139171s
	[INFO] 10.244.1.2:35824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200215s
	[INFO] 10.244.1.2:55897 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096777s
	[INFO] 10.244.1.2:41692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109849s
	[INFO] 10.244.0.4:40290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106794s
	[INFO] 10.244.0.4:46779 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132069s
	[INFO] 10.244.1.2:51125 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201243s
	[INFO] 10.244.1.2:54698 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184568s
	[INFO] 10.244.0.4:53882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193917s
	[INFO] 10.244.0.4:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121126s
	[INFO] 10.244.2.2:58238 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117978s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [856f46390ed078527dc9fe5b83277da5845af344640c03eadebd71a1879f67b0] <==
	[INFO] 10.244.0.4:53761 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001344827s
	[INFO] 10.244.0.4:59481 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051804s
	[INFO] 10.244.2.2:39523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137336s
	[INFO] 10.244.2.2:35477 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002190323s
	[INFO] 10.244.2.2:37515 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001525471s
	[INFO] 10.244.2.2:34201 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119381s
	[INFO] 10.244.1.2:42886 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000230949s
	[INFO] 10.244.0.4:43156 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079033s
	[INFO] 10.244.0.4:55330 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010674s
	[INFO] 10.244.2.2:47730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245903s
	[INFO] 10.244.2.2:54559 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165285s
	[INFO] 10.244.2.2:56225 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115859s
	[INFO] 10.244.2.2:54334 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001069s
	[INFO] 10.244.1.2:43809 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130742s
	[INFO] 10.244.1.2:56685 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199555s
	[INFO] 10.244.0.4:44188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154269s
	[INFO] 10.244.0.4:56530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000138351s
	[INFO] 10.244.2.2:34814 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138709s
	[INFO] 10.244.2.2:49549 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124443s
	[INFO] 10.244.2.2:35669 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000100712s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m58s&timeoutSeconds=538&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1725&timeout=8m37s&timeoutSeconds=517&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1722&timeout=6m53s&timeoutSeconds=413&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8702bda4a75f999cae13a6a085096a075790fad4020353a4b586d23faed2c95e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[461234772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.420) (total time: 10001ms):
	Trace[461234772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.421)
	Trace[461234772]: [10.001239871s] [10.001239871s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1495932049]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:24.566) (total time: 10001ms):
	Trace[1495932049]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:34.568)
	Trace[1495932049]: [10.00173189s] [10.00173189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [bc5f830d46b0161829836bf629be0030e2f1a56cf9dac1fd217f0b15d2f3e4ac] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1696823593]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:23.560) (total time: 10001ms):
	Trace[1696823593]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:33.562)
	Trace[1696823593]: [10.001227226s] [10.001227226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1783554872]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:22:26.582) (total time: 10001ms):
	Trace[1783554872]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:36.583)
	Trace[1783554872]: [10.001386462s] [10.001386462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:55110->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.651623] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.057096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058580] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170861] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.144465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.293344] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.055212] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.356595] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.065791] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.315036] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.090322] kauditd_printk_skb: 79 callbacks suppressed
	[Sep30 11:12] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.137075] kauditd_printk_skb: 38 callbacks suppressed
	[Sep30 11:13] kauditd_printk_skb: 24 callbacks suppressed
	[Sep30 11:19] kauditd_printk_skb: 1 callbacks suppressed
	[Sep30 11:22] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.171159] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +0.178992] systemd-fstab-generator[3598]: Ignoring "noauto" option for root device
	[  +0.163482] systemd-fstab-generator[3610]: Ignoring "noauto" option for root device
	[  +0.302477] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +1.773800] systemd-fstab-generator[3737]: Ignoring "noauto" option for root device
	[  +6.561875] kauditd_printk_skb: 122 callbacks suppressed
	[ +11.942924] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.334690] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [6209266b6bd4310eebf52ca1fd300a56e5f4d4c55c7befcaf5b3d961aab44699] <==
	{"level":"info","ts":"2024-09-30T11:22:52.325788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:52.325823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"warn","ts":"2024-09-30T11:22:52.382974Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:52.883272Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15368413068455007494,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-30T11:22:53.375924Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-09-30T11:22:53.376018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.001185827s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-09-30T11:22:53.376042Z","caller":"traceutil/trace.go:171","msg":"trace[951823920] range","detail":"{range_begin:; range_end:; }","duration":"7.001240597s","start":"2024-09-30T11:22:46.374792Z","end":"2024-09-30T11:22:53.376033Z","steps":["trace[951823920] 'agreement among raft nodes before linearized reading'  (duration: 7.001183883s)"],"step_count":1}
	{"level":"error","ts":"2024-09-30T11:22:53.376080Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"warn","ts":"2024-09-30T11:22:53.396827Z","caller":"etcdserver/server.go:2139","msg":"failed to publish local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:ha-033260 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
	{"level":"warn","ts":"2024-09-30T11:22:53.406496Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:53.413780Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1e3ede80da48fb5a","rtt":"0s","error":"dial tcp 192.168.39.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:22:53.418079Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-30T11:22:53.419227Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: no route to host"}
	{"level":"info","ts":"2024-09-30T11:22:54.025613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:54.025724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:54.025738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:54.025752Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:54.025759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:55.725342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:55.725396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:55.725410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:55.725424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to 1e3ede80da48fb5a at term 2"}
	{"level":"info","ts":"2024-09-30T11:22:55.725432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 [logterm: 2, index: 2081] sent MsgPreVote request to ff39ee5ac13ccc82 at term 2"}
	
	
	==> etcd [aa8ecc81d0af259c51958db3b384b1a62da85024faac4051a96b0fbb721e91f8] <==
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/30 11:20:36 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-30T11:20:36.370065Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:20:36.370167Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T11:20:36.370398Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"318ee90c3446d547","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-30T11:20:36.370773Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370856Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.370907Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371048Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371109Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371173Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371265Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:20:36.371274Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371355Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371462Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371746Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371825Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371896Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.371926Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1e3ede80da48fb5a"}
	{"level":"info","ts":"2024-09-30T11:20:36.375304Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"warn","ts":"2024-09-30T11:20:36.375330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.79480238s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-09-30T11:20:36.375483Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-033260","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	{"level":"info","ts":"2024-09-30T11:20:36.375455Z","caller":"traceutil/trace.go:171","msg":"trace[311727536] range","detail":"{range_begin:; range_end:; }","duration":"8.794947939s","start":"2024-09-30T11:20:27.580499Z","end":"2024-09-30T11:20:36.375447Z","steps":["trace[311727536] 'agreement among raft nodes before linearized reading'  (duration: 8.794800052s)"],"step_count":1}
	
	
	==> kernel <==
	 11:22:56 up 11 min,  0 users,  load average: 0.43, 0.45, 0.25
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6cf899810e1614b75c9475872ceb50ec8dcf1fed0857d7febea3992706943346] <==
	I0930 11:20:07.854375       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:07.854476       1 main.go:299] handling current node
	I0930 11:20:07.854505       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:07.854522       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:07.854768       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:07.854802       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:07.854864       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:07.854882       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862189       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:17.862294       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:20:17.862450       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:17.862475       1 main.go:299] handling current node
	I0930 11:20:17.862496       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:17.862512       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:17.862579       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:17.862598       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.860296       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:20:27.860570       1 main.go:299] handling current node
	I0930 11:20:27.860675       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:20:27.860705       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:20:27.860964       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:20:27.860987       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:20:27.861050       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:20:27.861068       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	E0930 11:20:36.321163       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [d71ac252dcd804a18b7b4c849af71e521832ab94b8fda36633fb53622c52ae22] <==
	I0930 11:22:18.408919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0930 11:22:18.411055       1 main.go:139] hostIP = 192.168.39.249
	podIP = 192.168.39.249
	I0930 11:22:18.411374       1 main.go:148] setting mtu 1500 for CNI 
	I0930 11:22:18.412734       1 main.go:178] kindnetd IP family: "ipv4"
	I0930 11:22:18.412859       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0930 11:22:19.102768       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	W0930 11:22:29.112365       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0930 11:22:29.113034       1 trace.go:236] Trace[1532943992]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (30-Sep-2024 11:22:19.102) (total time: 10009ms):
	Trace[1532943992]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10009ms (11:22:29.112)
	Trace[1532943992]: [10.009569146s] [10.009569146s] END
	E0930 11:22:29.113093       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0930 11:22:39.966399       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	E0930 11:22:39.966874       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 192.168.122.15:34582->10.96.0.1:443: read: connection reset by peer
	W0930 11:22:42.909485       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	E0930 11:22:42.909698       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	W0930 11:22:52.125278       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	E0930 11:22:52.125345       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3] <==
	I0930 11:22:18.351040       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:22:18.358932       1 server.go:142] Version: v1.31.1
	I0930 11:22:18.358991       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:18.951862       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:22:18.952819       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:22:18.958733       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:22:18.958775       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:22:18.959115       1 instance.go:232] Using reconciler: lease
	W0930 11:22:38.930134       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0930 11:22:38.930367       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:22:38.959894       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0] <==
	I0930 11:22:18.773138       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:22:19.094923       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:22:19.094962       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:22:19.097540       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:22:19.097832       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:22:19.097963       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:22:19.098124       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:22:39.965712       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.249:8443/healthz\": dial tcp 192.168.39.249:8443: connect: connection refused"
	
	
	==> kube-proxy [0a2a1de86fecaa0e664e194ad0ad82529df7064be3d1c1175f1bf7274ff61030] <==
	I0930 11:22:19.122779       1 server_linux.go:66] "Using iptables proxy"
	E0930 11:22:19.157903       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.183052       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:22:19.549933       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:22.621776       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:25.694800       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:31.839062       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0930 11:22:44.126368       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	
	
	==> kube-proxy [347597ebf9b2054ab77437e879ee112c6582729129d7f0aa43f5810028fca10c] <==
	E0930 11:19:18.749153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:18.749098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:18.749189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:21.949265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:21.949358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:25.021248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:25.021281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:28.093023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:28.093160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:33.661953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:33.662031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.734817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.734886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:36.735220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:36.735300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:19:52.094043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:19:52.094172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309076       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1739\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:01.309250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:01.309280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1700\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0930 11:20:32.031340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703": dial tcp 192.168.39.254:8443: connect: no route to host
	E0930 11:20:32.032079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1703\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [2435a21a0f6f84c3ffd21789f2a22ae0784155abbef7de3f0078227fb0bc3fb2] <==
	E0930 11:15:14.688017       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c071322f-794b-4d6f-a33a-92077352ef5d(kube-system/kindnet-kb2cp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kb2cp"
	E0930 11:15:14.688032       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kb2cp\": pod kindnet-kb2cp is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-kb2cp"
	I0930 11:15:14.688047       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kb2cp" node="ha-033260-m04"
	E0930 11:15:14.701899       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nbts6" node="ha-033260-m04"
	E0930 11:15:14.702003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nbts6\": pod kindnet-nbts6 is already assigned to node \"ha-033260-m04\"" pod="kube-system/kindnet-nbts6"
	E0930 11:15:14.702565       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:15:14.705542       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b2de7434-03f1-4bbc-ab62-3101483908c1(kube-system/kube-proxy-cr58q) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-cr58q"
	E0930 11:15:14.705602       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cr58q\": pod kube-proxy-cr58q is already assigned to node \"ha-033260-m04\"" pod="kube-system/kube-proxy-cr58q"
	I0930 11:15:14.705671       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cr58q" node="ha-033260-m04"
	E0930 11:20:20.605503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0930 11:20:22.554431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:23.502921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0930 11:20:23.897499       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0930 11:20:25.058394       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0930 11:20:25.967971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.690085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:26.719785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0930 11:20:28.017485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0930 11:20:28.510153       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0930 11:20:28.970490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0930 11:20:30.680510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0930 11:20:30.842087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0930 11:20:32.435288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0930 11:20:34.062903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0930 11:20:36.288434       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7ddf3925913def60fa0894fdccd80c6ab2aa0de2a5f747464fb5c51bb1840801] <==
	E0930 11:22:48.493080       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.797564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.797767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.836125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.836250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.249:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.876443       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.876506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:48.938087       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:48.938223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:49.872885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:49.873029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:50.083149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:50.083283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.249:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:55.258895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:55.258947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.249:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:55.488709       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:55.488752       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.249:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:55.537566       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:55.537697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.249:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:55.592001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:55.592041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.249:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:55.601996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:55.602042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.249:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	W0930 11:22:56.085865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.249:8443: connect: connection refused
	E0930 11:22:56.085947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.249:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.249:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125716    1307 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1765\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125576    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: I0930 11:22:44.125406    1307 status_manager.go:851] "Failed to get status for pod" podUID="4ee6f0cb154890b5d1bf6173256957d4" pod="kube-system/etcd-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:44 ha-033260 kubelet[1307]: E0930 11:22:44.125276    1307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.247863    1307 scope.go:117] "RemoveContainer" containerID="9ce2338b980fa463470d62456206e8abc2e5f8c6b48848126aa967298fd50db0"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.248393    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-033260_kube-system(43955f8cf95999657a88952585c93768)\"" pod="kube-system/kube-controller-manager-ha-033260" podUID="43955f8cf95999657a88952585c93768"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: I0930 11:22:46.864719    1307 scope.go:117] "RemoveContainer" containerID="5a83e9ce6a32aa80c5b9e9f8552503d13cfe0849cdc823320cb154b7cd4b63a3"
	Sep 30 11:22:46 ha-033260 kubelet[1307]: E0930 11:22:46.864888    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-033260_kube-system(6c1732ebd63e52d0c6ac6d9cd648cff5)\"" pod="kube-system/kube-apiserver-ha-033260" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197121    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197334    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1685\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197299    1307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-033260.17fa018ec87563f8  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-033260,UID:6c1732ebd63e52d0c6ac6d9cd648cff5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-033260,},FirstTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,LastTimestamp:2024-09-30 11:18:39.81012684 +0000 UTC m=+400.493343197,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:
nil,ReportingController:kubelet,ReportingInstance:ha-033260,}"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: I0930 11:22:47.197407    1307 status_manager.go:851] "Failed to get status for pod" podUID="734999721cb3f48c24354599fcaf3db2" pod="kube-system/kube-scheduler-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197256    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:47 ha-033260 kubelet[1307]: W0930 11:22:47.197144    1307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681": dial tcp 192.168.39.254:8443: connect: no route to host
	Sep 30 11:22:47 ha-033260 kubelet[1307]: E0930 11:22:47.197942    1307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-033260&resourceVersion=1681\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688509    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:49 ha-033260 kubelet[1307]: E0930 11:22:49.688533    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695369688149390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269074    1307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ha-033260\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: E0930 11:22:50.269229    1307 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	Sep 30 11:22:50 ha-033260 kubelet[1307]: I0930 11:22:50.269083    1307 status_manager.go:851] "Failed to get status for pod" podUID="6c1732ebd63e52d0c6ac6d9cd648cff5" pod="kube-system/kube-apiserver-ha-033260" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:52 ha-033260 kubelet[1307]: I0930 11:22:52.442046    1307 scope.go:117] "RemoveContainer" containerID="cb85b01ef51db041501170a79251e303c200117f93711af8d3ce9c973261dcda"
	Sep 30 11:22:52 ha-033260 kubelet[1307]: E0930 11:22:52.442202    1307 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(964381ab-f2ac-4361-a7e0-5212fff5e26e)\"" pod="kube-system/storage-provisioner" podUID="964381ab-f2ac-4361-a7e0-5212fff5e26e"
	Sep 30 11:22:53 ha-033260 kubelet[1307]: E0930 11:22:53.341056    1307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-033260?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Sep 30 11:22:53 ha-033260 kubelet[1307]: I0930 11:22:53.341059    1307 status_manager.go:851] "Failed to get status for pod" podUID="260e385d-9e17-4af8-a854-8683afb714c4" pod="kube-system/kindnet-g94k6" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-g94k6\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Sep 30 11:22:56 ha-033260 kubelet[1307]: I0930 11:22:56.413136    1307 status_manager.go:851] "Failed to get status for pod" podUID="7333717d-95d5-4990-bac9-8443a51eee97" pod="kube-system/coredns-7c65d6cfc9-5frmm" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm\": dial tcp 192.168.39.254:8443: connect: no route to host"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:22:55.360056   34067 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260: exit status 2 (226.080513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-033260" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.28s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (146.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-033260 stop -v=7 --alsologtostderr: exit status 82 (2m4.740782316s)

                                                
                                                
-- stdout --
	* Stopping node "ha-033260-m04"  ...
	* Stopping node "ha-033260-m03"  ...
	* Stopping node "ha-033260-m02"  ...
	* Stopping node "ha-033260"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:22:56.956656   34138 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:22:56.956751   34138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:56.956758   34138 out.go:358] Setting ErrFile to fd 2...
	I0930 11:22:56.956763   34138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:22:56.956933   34138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:22:56.957130   34138 out.go:352] Setting JSON to false
	I0930 11:22:56.957201   34138 mustload.go:65] Loading cluster: ha-033260
	I0930 11:22:56.957599   34138 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:22:56.957717   34138 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:22:56.957894   34138 mustload.go:65] Loading cluster: ha-033260
	I0930 11:22:56.958023   34138 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:22:56.958051   34138 stop.go:39] StopHost: ha-033260-m04
	I0930 11:22:56.958410   34138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:56.958448   34138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:56.973243   34138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34463
	I0930 11:22:56.973928   34138 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:56.974487   34138 main.go:141] libmachine: Using API Version  1
	I0930 11:22:56.974506   34138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:56.974806   34138 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:56.976960   34138 out.go:177] * Stopping node "ha-033260-m04"  ...
	I0930 11:22:56.978734   34138 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:22:56.978758   34138 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:22:56.978971   34138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:22:56.978995   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:22:56.980483   34138 retry.go:31] will retry after 226.216577ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:57.206863   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:22:57.208383   34138 retry.go:31] will retry after 433.620276ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:57.643100   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:22:57.644800   34138 retry.go:31] will retry after 483.294216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:58.128424   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:22:58.130297   34138 retry.go:31] will retry after 757.379169ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:58.888215   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	W0930 11:22:58.890190   34138 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:58.890220   34138 main.go:141] libmachine: Stopping "ha-033260-m04"...
	I0930 11:22:58.890228   34138 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:22:58.891417   34138 stop.go:66] stop err: Machine "ha-033260-m04" is already stopped.
	I0930 11:22:58.891456   34138 stop.go:69] host is already stopped
	I0930 11:22:58.891470   34138 stop.go:39] StopHost: ha-033260-m03
	I0930 11:22:58.891792   34138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:22:58.891839   34138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:22:58.907092   34138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I0930 11:22:58.907517   34138 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:22:58.907983   34138 main.go:141] libmachine: Using API Version  1
	I0930 11:22:58.908007   34138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:22:58.908380   34138 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:22:58.910663   34138 out.go:177] * Stopping node "ha-033260-m03"  ...
	I0930 11:22:58.912416   34138 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:22:58.912444   34138 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:22:58.912690   34138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:22:58.912714   34138 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:22:58.914459   34138 retry.go:31] will retry after 353.095056ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:59.267922   34138 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:22:59.269759   34138 retry.go:31] will retry after 305.242265ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:22:59.576060   34138 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:22:59.577683   34138 retry.go:31] will retry after 422.288741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:23:00.000243   34138 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:23:00.001888   34138 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0930 11:23:00.001919   34138 main.go:141] libmachine: Stopping "ha-033260-m03"...
	I0930 11:23:00.001927   34138 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:23:00.002891   34138 stop.go:66] stop err: Machine "ha-033260-m03" is already stopped.
	I0930 11:23:00.002910   34138 stop.go:69] host is already stopped
	I0930 11:23:00.002922   34138 stop.go:39] StopHost: ha-033260-m02
	I0930 11:23:00.003236   34138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:23:00.003271   34138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:23:00.019253   34138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0930 11:23:00.019788   34138 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:23:00.020264   34138 main.go:141] libmachine: Using API Version  1
	I0930 11:23:00.020287   34138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:23:00.020591   34138 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:23:00.022883   34138 out.go:177] * Stopping node "ha-033260-m02"  ...
	I0930 11:23:00.024139   34138 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:23:00.024161   34138 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:23:00.024380   34138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:23:00.024403   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:23:00.026935   34138 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:23:00.027316   34138 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:22:22 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:23:00.027345   34138 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:23:00.027465   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:23:00.027640   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:23:00.027796   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:23:00.027924   34138 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:23:00.117344   34138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 11:23:00.172465   34138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	W0930 11:23:00.225832   34138 stop.go:55] failed to complete vm config backup (will continue): [failed to copy "/etc/kubernetes" to "/var/lib/minikube/backup" (will continue): sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup: Process exited with status 23
	stdout:
	
	stderr:
	rsync: [sender] link_stat "/etc/kubernetes" failed: No such file or directory (2)
	rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1336) [sender=3.2.7]
	]
	I0930 11:23:00.225868   34138 main.go:141] libmachine: Stopping "ha-033260-m02"...
	I0930 11:23:00.225878   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:23:00.227410   34138 main.go:141] libmachine: (ha-033260-m02) Calling .Stop
	I0930 11:23:00.230999   34138 main.go:141] libmachine: (ha-033260-m02) Waiting for machine to stop 0/120
	I0930 11:23:01.232844   34138 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:23:01.233963   34138 main.go:141] libmachine: Machine "ha-033260-m02" was stopped.
	I0930 11:23:01.233979   34138 stop.go:75] duration metric: took 1.209841736s to stop
	I0930 11:23:01.233993   34138 stop.go:39] StopHost: ha-033260
	I0930 11:23:01.234282   34138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:23:01.234323   34138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:23:01.249126   34138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41885
	I0930 11:23:01.249588   34138 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:23:01.250069   34138 main.go:141] libmachine: Using API Version  1
	I0930 11:23:01.250090   34138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:23:01.250430   34138 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:23:01.252843   34138 out.go:177] * Stopping node "ha-033260"  ...
	I0930 11:23:01.253966   34138 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0930 11:23:01.253993   34138 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:23:01.254245   34138 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0930 11:23:01.254272   34138 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:23:01.257237   34138 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:23:01.257655   34138 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:23:01.257682   34138 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:23:01.257836   34138 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:23:01.258013   34138 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:23:01.258234   34138 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:23:01.258392   34138 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:23:01.344720   34138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0930 11:23:01.398717   34138 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0930 11:23:01.453427   34138 main.go:141] libmachine: Stopping "ha-033260"...
	I0930 11:23:01.453455   34138 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:23:01.455088   34138 main.go:141] libmachine: (ha-033260) Calling .Stop
	I0930 11:23:01.458387   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 0/120
	I0930 11:23:02.459852   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 1/120
	I0930 11:23:03.461908   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 2/120
	I0930 11:23:04.463320   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 3/120
	I0930 11:23:05.464860   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 4/120
	I0930 11:23:06.467043   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 5/120
	I0930 11:23:07.468460   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 6/120
	I0930 11:23:08.469904   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 7/120
	I0930 11:23:09.471624   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 8/120
	I0930 11:23:10.473110   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 9/120
	I0930 11:23:11.475105   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 10/120
	I0930 11:23:12.476576   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 11/120
	I0930 11:23:13.478005   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 12/120
	I0930 11:23:14.479498   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 13/120
	I0930 11:23:15.481075   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 14/120
	I0930 11:23:16.483080   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 15/120
	I0930 11:23:17.484443   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 16/120
	I0930 11:23:18.485884   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 17/120
	I0930 11:23:19.487344   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 18/120
	I0930 11:23:20.488922   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 19/120
	I0930 11:23:21.490812   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 20/120
	I0930 11:23:22.492233   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 21/120
	I0930 11:23:23.493820   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 22/120
	I0930 11:23:24.495564   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 23/120
	I0930 11:23:25.497110   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 24/120
	I0930 11:23:26.499223   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 25/120
	I0930 11:23:27.500706   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 26/120
	I0930 11:23:28.502632   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 27/120
	I0930 11:23:29.504043   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 28/120
	I0930 11:23:30.505634   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 29/120
	I0930 11:23:31.507513   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 30/120
	I0930 11:23:32.508949   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 31/120
	I0930 11:23:33.510694   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 32/120
	I0930 11:23:34.512377   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 33/120
	I0930 11:23:35.513856   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 34/120
	I0930 11:23:36.515360   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 35/120
	I0930 11:23:37.516834   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 36/120
	I0930 11:23:38.518322   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 37/120
	I0930 11:23:39.519765   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 38/120
	I0930 11:23:40.521420   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 39/120
	I0930 11:23:41.523716   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 40/120
	I0930 11:23:42.525036   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 41/120
	I0930 11:23:43.526693   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 42/120
	I0930 11:23:44.527996   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 43/120
	I0930 11:23:45.529451   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 44/120
	I0930 11:23:46.531324   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 45/120
	I0930 11:23:47.532720   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 46/120
	I0930 11:23:48.534150   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 47/120
	I0930 11:23:49.535716   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 48/120
	I0930 11:23:50.537464   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 49/120
	I0930 11:23:51.539694   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 50/120
	I0930 11:23:52.541148   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 51/120
	I0930 11:23:53.542728   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 52/120
	I0930 11:23:54.544241   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 53/120
	I0930 11:23:55.545446   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 54/120
	I0930 11:23:56.547390   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 55/120
	I0930 11:23:57.548822   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 56/120
	I0930 11:23:58.550217   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 57/120
	I0930 11:23:59.551695   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 58/120
	I0930 11:24:00.553124   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 59/120
	I0930 11:24:01.554982   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 60/120
	I0930 11:24:02.556343   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 61/120
	I0930 11:24:03.558039   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 62/120
	I0930 11:24:04.559479   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 63/120
	I0930 11:24:05.560848   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 64/120
	I0930 11:24:06.562763   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 65/120
	I0930 11:24:07.564563   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 66/120
	I0930 11:24:08.565998   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 67/120
	I0930 11:24:09.567533   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 68/120
	I0930 11:24:10.569202   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 69/120
	I0930 11:24:11.571216   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 70/120
	I0930 11:24:12.572473   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 71/120
	I0930 11:24:13.574027   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 72/120
	I0930 11:24:14.575539   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 73/120
	I0930 11:24:15.577057   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 74/120
	I0930 11:24:16.579188   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 75/120
	I0930 11:24:17.580484   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 76/120
	I0930 11:24:18.582080   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 77/120
	I0930 11:24:19.583428   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 78/120
	I0930 11:24:20.584834   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 79/120
	I0930 11:24:21.586961   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 80/120
	I0930 11:24:22.588539   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 81/120
	I0930 11:24:23.590012   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 82/120
	I0930 11:24:24.591360   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 83/120
	I0930 11:24:25.593010   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 84/120
	I0930 11:24:26.594727   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 85/120
	I0930 11:24:27.596346   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 86/120
	I0930 11:24:28.597932   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 87/120
	I0930 11:24:29.599478   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 88/120
	I0930 11:24:30.600868   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 89/120
	I0930 11:24:31.602575   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 90/120
	I0930 11:24:32.603916   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 91/120
	I0930 11:24:33.605237   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 92/120
	I0930 11:24:34.606566   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 93/120
	I0930 11:24:35.608068   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 94/120
	I0930 11:24:36.609915   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 95/120
	I0930 11:24:37.611428   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 96/120
	I0930 11:24:38.612952   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 97/120
	I0930 11:24:39.614288   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 98/120
	I0930 11:24:40.616086   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 99/120
	I0930 11:24:41.618021   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 100/120
	I0930 11:24:42.619874   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 101/120
	I0930 11:24:43.621522   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 102/120
	I0930 11:24:44.623077   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 103/120
	I0930 11:24:45.624348   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 104/120
	I0930 11:24:46.626478   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 105/120
	I0930 11:24:47.628042   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 106/120
	I0930 11:24:48.629510   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 107/120
	I0930 11:24:49.631063   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 108/120
	I0930 11:24:50.632555   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 109/120
	I0930 11:24:51.634632   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 110/120
	I0930 11:24:52.636035   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 111/120
	I0930 11:24:53.637640   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 112/120
	I0930 11:24:54.639188   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 113/120
	I0930 11:24:55.640529   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 114/120
	I0930 11:24:56.642523   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 115/120
	I0930 11:24:57.644118   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 116/120
	I0930 11:24:58.645484   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 117/120
	I0930 11:24:59.647037   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 118/120
	I0930 11:25:00.648182   34138 main.go:141] libmachine: (ha-033260) Waiting for machine to stop 119/120
	I0930 11:25:01.649109   34138 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0930 11:25:01.649151   34138 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0930 11:25:01.651134   34138 out.go:201] 
	W0930 11:25:01.652467   34138 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0930 11:25:01.652483   34138 out.go:270] * 
	* 
	W0930 11:25:01.654703   34138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 11:25:01.656533   34138 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-033260 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
E0930 11:25:18.067212   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr: (18.439556544s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260: exit status 3 (3.167871023s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:25:23.266048   34674 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0930 11:25:23.266070   34674 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-033260" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopCluster (146.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (466.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-033260 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 11:30:18.064150   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:31:41.129719   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-033260 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m42.890481859s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
ha_test.go:571: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:574: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:577: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:580: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.826738429s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-033260 stop -v=7                                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true                                                         | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:25 UTC | 30 Sep 24 11:33 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:25:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:25:23.307171   34720 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:25:23.307438   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307448   34720 out.go:358] Setting ErrFile to fd 2...
	I0930 11:25:23.307454   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307638   34720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:25:23.308189   34720 out.go:352] Setting JSON to false
	I0930 11:25:23.309088   34720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4070,"bootTime":1727691453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:25:23.309188   34720 start.go:139] virtualization: kvm guest
	I0930 11:25:23.312163   34720 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:25:23.313387   34720 notify.go:220] Checking for updates...
	I0930 11:25:23.313393   34720 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:25:23.314778   34720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:25:23.316338   34720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:25:23.317962   34720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:25:23.319385   34720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:25:23.320813   34720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:25:23.322948   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:25:23.323340   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.323412   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.338759   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0930 11:25:23.339192   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.339786   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.339807   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.340136   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.340346   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.340572   34720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:25:23.340857   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.340891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.355777   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0930 11:25:23.356254   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.356744   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.356763   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.357120   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.357292   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.393653   34720 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:25:23.394968   34720 start.go:297] selected driver: kvm2
	I0930 11:25:23.394986   34720 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.395148   34720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:25:23.395486   34720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.395574   34720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:25:23.411100   34720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:25:23.411834   34720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:25:23.411865   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:25:23.411907   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:25:23.411964   34720 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.412098   34720 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.413851   34720 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:25:23.415381   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:25:23.415422   34720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:25:23.415429   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:25:23.415534   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:25:23.415546   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:25:23.415667   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:25:23.415859   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:25:23.415901   34720 start.go:364] duration metric: took 23.767µs to acquireMachinesLock for "ha-033260"
	I0930 11:25:23.415913   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:25:23.415920   34720 fix.go:54] fixHost starting: 
	I0930 11:25:23.416165   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.416196   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.430823   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0930 11:25:23.431277   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.431704   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.431723   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.432018   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.432228   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.432375   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:25:23.433975   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:25:23.434007   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:25:23.436150   34720 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:25:23.437473   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:25:23.437494   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.437753   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:25:23.440392   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.440831   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:25:23.440858   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.441041   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:25:23.441214   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441380   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441502   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:25:23.441655   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:25:23.441833   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:25:23.441844   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:25:26.337999   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:29.409914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:35.489955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:38.561928   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:44.641887   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:47.713916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:53.793988   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:56.865946   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:10.017864   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:16.097850   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:19.169940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:25.249934   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:28.321888   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:34.401910   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:37.473948   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:43.553872   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:46.625911   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:52.705908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:55.777884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:01.857921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:04.929922   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:11.009956   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:14.081936   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:20.161884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:23.233917   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:29.313903   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:32.385985   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:38.465815   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:41.537920   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:47.617898   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:50.689890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:56.769908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:59.841901   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:05.921893   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:08.993941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:15.073913   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:18.145943   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:24.225916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:27.297994   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:33.377803   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:36.449892   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:42.529904   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:45.601915   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:51.681921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:54.753890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:00.833932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:03.905924   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:09.985909   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:13.057955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:19.137932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:22.209941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:28.289972   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:31.361973   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:37.441940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:40.513906   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:46.593938   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:49.665931   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:55.745914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:58.817932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:04.897939   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:07.900098   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:07.900146   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900476   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:07.900498   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900690   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:07.902604   34720 machine.go:96] duration metric: took 4m44.465113929s to provisionDockerMachine
	I0930 11:30:07.902642   34720 fix.go:56] duration metric: took 4m44.486721557s for fixHost
	I0930 11:30:07.902649   34720 start.go:83] releasing machines lock for "ha-033260", held for 4m44.486740655s
	W0930 11:30:07.902664   34720 start.go:714] error starting host: provision: host is not running
	W0930 11:30:07.902739   34720 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 11:30:07.902751   34720 start.go:729] Will try again in 5 seconds ...
	I0930 11:30:12.906532   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:12.906673   34720 start.go:364] duration metric: took 71.92µs to acquireMachinesLock for "ha-033260"
	I0930 11:30:12.906700   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:12.906710   34720 fix.go:54] fixHost starting: 
	I0930 11:30:12.906980   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:12.907012   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:12.922017   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0930 11:30:12.922407   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:12.922875   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:12.922898   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:12.923192   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:12.923373   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:12.923532   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:30:12.925123   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Stopped err=<nil>
	I0930 11:30:12.925146   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	W0930 11:30:12.925301   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:12.927074   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260" ...
	I0930 11:30:12.928250   34720 main.go:141] libmachine: (ha-033260) Calling .Start
	I0930 11:30:12.928414   34720 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:30:12.929185   34720 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:30:12.929536   34720 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:30:12.929877   34720 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:30:12.930569   34720 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:30:14.153271   34720 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:30:14.154287   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.154676   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.154756   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.154665   35728 retry.go:31] will retry after 246.651231ms: waiting for machine to come up
	I0930 11:30:14.403231   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.403674   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.403727   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.403659   35728 retry.go:31] will retry after 262.960523ms: waiting for machine to come up
	I0930 11:30:14.668247   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.668711   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.668739   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.668675   35728 retry.go:31] will retry after 381.564783ms: waiting for machine to come up
	I0930 11:30:15.052320   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.052821   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.052846   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.052760   35728 retry.go:31] will retry after 588.393032ms: waiting for machine to come up
	I0930 11:30:15.642361   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.642772   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.642801   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.642723   35728 retry.go:31] will retry after 588.302425ms: waiting for machine to come up
	I0930 11:30:16.232721   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:16.233152   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:16.233171   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:16.233111   35728 retry.go:31] will retry after 770.742378ms: waiting for machine to come up
	I0930 11:30:17.005248   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:17.005687   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:17.005718   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:17.005645   35728 retry.go:31] will retry after 1.118737809s: waiting for machine to come up
	I0930 11:30:18.126316   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:18.126728   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:18.126755   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:18.126678   35728 retry.go:31] will retry after 1.317343847s: waiting for machine to come up
	I0930 11:30:19.446227   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:19.446785   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:19.446810   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:19.446709   35728 retry.go:31] will retry after 1.309700527s: waiting for machine to come up
	I0930 11:30:20.758241   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:20.758680   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:20.758702   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:20.758651   35728 retry.go:31] will retry after 1.521862953s: waiting for machine to come up
	I0930 11:30:22.282731   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:22.283205   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:22.283242   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:22.283159   35728 retry.go:31] will retry after 2.906878377s: waiting for machine to come up
	I0930 11:30:25.192687   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:25.193133   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:25.193170   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:25.193111   35728 retry.go:31] will retry after 2.807596314s: waiting for machine to come up
	I0930 11:30:28.002489   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:28.002972   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:28.003005   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:28.002951   35728 retry.go:31] will retry after 2.762675727s: waiting for machine to come up
	I0930 11:30:30.769002   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.769600   34720 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:30:30.769647   34720 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:30:30.769660   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.770061   34720 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:30:30.770097   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.770113   34720 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:30:30.770138   34720 main.go:141] libmachine: (ha-033260) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"}
	I0930 11:30:30.770150   34720 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
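The block above is libmachine polling libvirt's DHCP leases for the freshly started domain, backing off with a growing, jittered delay until the guest picks up 192.168.39.249 and the lease can be pinned to its MAC address. A minimal sketch of that retry pattern; the lookupIP helper, growth factor and jitter here are placeholders, not minikube's actual retry.go implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying the libvirt DHCP leases for the domain's
    // MAC address; it fails until the guest has actually requested a lease.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a randomized, growing delay, similar in
    // spirit to the retry.go backoff visible in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("Found IP for machine:", ip)
    	}
    }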
	I0930 11:30:30.772370   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.772760   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772873   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:30:30.772897   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:30:30.772957   34720 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:30.772978   34720 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:30:30.772991   34720 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:30:30.902261   34720 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:30.902682   34720 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:30:30.903345   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:30.905986   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906435   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.906466   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906792   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:30.907003   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:30.907027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:30.907234   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:30.909474   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.909877   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.909908   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.910031   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:30.910192   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910303   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910430   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:30.910552   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:30.910754   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:30.910767   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:31.026522   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:31.026555   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.026772   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:31.026799   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.027007   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.029600   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.029965   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.029992   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.030147   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.030327   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030457   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030592   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.030726   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.030900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.030913   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:30:31.158417   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:30:31.158470   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.161439   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.161861   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.161898   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.162135   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.162317   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162476   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162595   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.162742   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.162897   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.162912   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:31.283806   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
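The hostname step runs two remote commands: one to write /etc/hostname, and the shell fragment above, which only touches /etc/hosts when the new name is missing, either rewriting an existing 127.0.1.1 entry or appending one. A hedged sketch of how such a command string can be assembled in Go; the helper name is illustrative, not minikube's code:

    package main

    import "fmt"

    // hostsUpdateCmd builds the idempotent /etc/hosts edit shown in the log:
    // do nothing if the hostname is already present, rewrite an existing
    // 127.0.1.1 line if there is one, otherwise append a new entry.
    func hostsUpdateCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() {
    	fmt.Println(hostsUpdateCmd("ha-033260"))
    }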
	I0930 11:30:31.283837   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:31.283864   34720 buildroot.go:174] setting up certificates
	I0930 11:30:31.283877   34720 provision.go:84] configureAuth start
	I0930 11:30:31.283888   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.284156   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:31.287095   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287561   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.287586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287860   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.290260   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290610   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.290638   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290768   34720 provision.go:143] copyHostCerts
	I0930 11:30:31.290802   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290847   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:31.290855   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290923   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:31.291012   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291029   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:31.291036   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291062   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:31.291116   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291138   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:31.291144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291169   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:31.291235   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:30:31.357378   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:31.357434   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:31.357461   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.360265   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360612   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.360639   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360895   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.361087   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.361219   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.361344   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.448948   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:31.449019   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:31.478937   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:31.479012   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:30:31.509585   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:31.509668   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:31.539539   34720 provision.go:87] duration metric: took 255.649967ms to configureAuth
	I0930 11:30:31.539565   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:31.539759   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:31.539826   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.542626   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543038   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.543072   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543261   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.543501   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543644   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543761   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.543949   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.544136   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.544151   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:31.800600   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:31.800624   34720 machine.go:96] duration metric: took 893.601125ms to provisionDockerMachine
	I0930 11:30:31.800638   34720 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:30:31.800650   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:31.800670   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.801007   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:31.801030   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.803813   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804193   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.804222   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804441   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.804604   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.804769   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.804939   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.893164   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:31.898324   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:31.898349   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:31.898488   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:31.898642   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:31.898657   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:31.898771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:31.909611   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:31.940213   34720 start.go:296] duration metric: took 139.562436ms for postStartSetup
	I0930 11:30:31.940253   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.940567   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:31.940600   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.943464   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.943880   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.943909   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.944048   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.944346   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.944569   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.944768   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.028986   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:32.029069   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:32.087362   34720 fix.go:56] duration metric: took 19.180639105s for fixHost
	I0930 11:30:32.087405   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.090539   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.090962   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.090988   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.091151   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.091371   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091585   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091707   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.091851   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:32.092025   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:32.092044   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:32.206950   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695832.171402259
	
	I0930 11:30:32.206975   34720 fix.go:216] guest clock: 1727695832.171402259
	I0930 11:30:32.206982   34720 fix.go:229] Guest: 2024-09-30 11:30:32.171402259 +0000 UTC Remote: 2024-09-30 11:30:32.087388641 +0000 UTC m=+308.814519334 (delta=84.013618ms)
	I0930 11:30:32.207008   34720 fix.go:200] guest clock delta is within tolerance: 84.013618ms
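The guest clock check compares the output of `date +%s.%N` on the VM against the host's wall clock at the moment the SSH command returned; the resulting 84.013618ms delta is then judged against a tolerance before provisioning continues. A small worked example of that arithmetic in Go, using the values from the log (the 2-second tolerance is assumed for illustration; minikube keeps its own threshold in fix.go):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest time as reported over SSH by `date +%s.%N`: 1727695832.171402259.
    	guest := time.Unix(1727695832, 171402259)

    	// Host-side wall clock captured when the SSH command returned.
    	host := time.Date(2024, 9, 30, 11, 30, 32, 87388641, time.UTC)

    	delta := guest.Sub(host)

    	const tolerance = 2 * time.Second // illustrative tolerance only
    	within := delta < tolerance && delta > -tolerance
    	fmt.Printf("guest clock delta: %v (within tolerance: %t)\n", delta, within)
    	// Output: guest clock delta: 84.013618ms (within tolerance: true)
    }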
	I0930 11:30:32.207014   34720 start.go:83] releasing machines lock for "ha-033260", held for 19.300329364s
	I0930 11:30:32.207037   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.207322   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:32.209968   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210394   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.210419   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210638   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211106   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211267   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211375   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:32.211419   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.211462   34720 ssh_runner.go:195] Run: cat /version.json
	I0930 11:30:32.211487   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.213826   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214176   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214200   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214221   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214463   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.214607   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.214713   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.214734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214757   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214877   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.214902   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.215061   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.215198   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.215320   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.318873   34720 ssh_runner.go:195] Run: systemctl --version
	I0930 11:30:32.325516   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:32.483433   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:32.489924   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:32.489999   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:32.509691   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:32.509716   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:32.509773   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:32.529220   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:32.544880   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:32.544953   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:32.561347   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:32.576185   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:32.696192   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:32.856000   34720 docker.go:233] disabling docker service ...
	I0930 11:30:32.856061   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:32.872115   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:32.886462   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:33.019718   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:33.149810   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:33.165943   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:33.188911   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:33.188984   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.202121   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:33.202191   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.214960   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.227336   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.239366   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:33.251818   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.264121   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.285246   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.297242   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:30:33.307951   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:30:33.308020   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:30:33.324031   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
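The netfilter probe above fails because br_netfilter is not loaded yet, so the missing /proc entry is treated as a cue to modprobe the module and then enable IPv4 forwarding. A rough sketch of that check-then-fix flow, assuming passwordless sudo and plain os/exec rather than minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: if the sysctl key
    // cannot be read, load br_netfilter, then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// A missing /proc/sys/net/bridge/* entry usually means the module is not loaded.
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("error:", err)
    	}
    }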
	I0930 11:30:33.335459   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:33.464418   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:30:33.563219   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:30:33.563313   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:30:33.568915   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:30:33.568982   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:30:33.575600   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:30:33.617027   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:30:33.617123   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.651093   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.682607   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:30:33.684108   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:33.687198   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687568   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:33.687586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687860   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:30:33.692422   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:30:33.706358   34720 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:30:33.706513   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:33.706553   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:33.741648   34720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:30:33.741712   34720 ssh_runner.go:195] Run: which lz4
	I0930 11:30:33.746514   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:30:33.746605   34720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:30:33.751033   34720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:30:33.751094   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:30:35.211096   34720 crio.go:462] duration metric: took 1.464517464s to copy over tarball
	I0930 11:30:35.211178   34720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:30:37.290495   34720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.079293521s)
	I0930 11:30:37.290519   34720 crio.go:469] duration metric: took 2.079396835s to extract the tarball
	I0930 11:30:37.290526   34720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:30:37.328103   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:37.375779   34720 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:30:37.375803   34720 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:30:37.375810   34720 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:30:37.375919   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:30:37.376009   34720 ssh_runner.go:195] Run: crio config
	I0930 11:30:37.430483   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:30:37.430505   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:30:37.430513   34720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:30:37.430534   34720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:30:37.430658   34720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
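The kubeadm/kubelet/kube-proxy configuration above is rendered from the option struct logged at kubeadm.go:181: the control-plane endpoint, pod and service CIDRs, cgroup driver and CRI socket are substituted into a YAML template. A trimmed, hypothetical template sketch showing that substitution for the networking stanza only (the template text and struct here are illustrative, not minikube's actual template assets):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed stand-in for the kubeadm config template, fed from the same
    // fields seen in the "kubeadm options" line above.
    var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `))

    func main() {
    	// Values taken from the log; the anonymous struct is illustrative.
    	_ = kubeadmTmpl.Execute(os.Stdout, struct {
    		ControlPlaneAddress, KubernetesVersion, PodSubnet, ServiceCIDR string
    		APIServerPort                                                  int
    	}{
    		ControlPlaneAddress: "control-plane.minikube.internal",
    		KubernetesVersion:   "v1.31.1",
    		PodSubnet:           "10.244.0.0/16",
    		ServiceCIDR:         "10.96.0.0/12",
    		APIServerPort:       8443,
    	})
    }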
	
	I0930 11:30:37.430678   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:30:37.430719   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:30:37.447824   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:30:37.447927   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
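The rendered kube-vip manifest (the 1441-byte kube-vip.yaml pushed a few lines below) is installed as a static pod: it is written into the kubelet staticPodPath declared in the KubeletConfiguration above, so kubelet starts it without needing the API server. A minimal sketch of that write, with a hypothetical helper and a stub manifest rather than the real file:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // installStaticPod writes a rendered manifest into kubelet's static pod
    // directory (/etc/kubernetes/manifests on the guest); kubelet watches that
    // directory and starts the pod on its own.
    func installStaticPod(dir, name string, manifest []byte) error {
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	return os.WriteFile(filepath.Join(dir, name), manifest, 0o600)
    }

    func main() {
    	// Demonstrate the write locally with a stub manifest; in the run above
    	// the real target is /etc/kubernetes/manifests/kube-vip.yaml.
    	err := installStaticPod(os.TempDir(), "kube-vip.yaml", []byte("apiVersion: v1\nkind: Pod\n"))
    	fmt.Println("installed:", err == nil)
    }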
	I0930 11:30:37.447977   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:30:37.458530   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:30:37.458608   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:30:37.469126   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:30:37.487666   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:30:37.505980   34720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:30:37.524942   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:30:37.543099   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:30:37.547174   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:30:37.560565   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:37.703633   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:30:37.722433   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:30:37.722455   34720 certs.go:194] generating shared ca certs ...
	I0930 11:30:37.722471   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:37.722631   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:30:37.722669   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:30:37.722678   34720 certs.go:256] generating profile certs ...
	I0930 11:30:37.722756   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:30:37.722813   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:30:37.722850   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:30:37.722861   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:30:37.722873   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:30:37.722886   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:30:37.722898   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:30:37.722909   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:30:37.722931   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:30:37.722944   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:30:37.722956   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:30:37.723015   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:30:37.723047   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:30:37.723058   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:30:37.723082   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:30:37.723107   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:30:37.723127   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:30:37.723167   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:37.723194   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:37.723207   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:30:37.723219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:30:37.723778   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:30:37.765086   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:30:37.796973   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:30:37.825059   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:30:37.855521   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:30:37.899131   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:30:37.930900   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:30:37.980558   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:30:38.038804   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:30:38.087704   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:30:38.115070   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:30:38.143055   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:30:38.165228   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:30:38.181120   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:30:38.193472   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199554   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199622   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.206544   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:30:38.218674   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:30:38.230696   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235800   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235869   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.242027   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:30:38.253962   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:30:38.265695   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270860   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270930   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.277134   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:30:38.288946   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:30:38.294078   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:30:38.300823   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:30:38.307442   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:30:38.314085   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:30:38.320482   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:30:38.327174   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
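	The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the existing cluster is reused. For reference, a minimal Go sketch of an equivalent check (illustrative only; the certificate path is taken from the log, the helper name is made up):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path remains valid for
	// at least d more time (the log uses 86400 seconds, i.e. 24 hours).
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for another 24h:", ok)
	}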
	I0930 11:30:38.333995   34720 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:30:38.334150   34720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:30:38.334251   34720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:30:38.372351   34720 cri.go:89] found id: ""
	I0930 11:30:38.372413   34720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:30:38.383026   34720 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:30:38.383043   34720 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:30:38.383100   34720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:30:38.394015   34720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:30:38.394528   34720 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:30:38.394558   34720 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.249:8443
	I0930 11:30:38.394772   34720 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-3842/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I0930 11:30:38.395022   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.395487   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.395704   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:30:38.396149   34720 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:30:38.396377   34720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:30:38.407784   34720 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0930 11:30:38.407813   34720 kubeadm.go:597] duration metric: took 24.764144ms to restartPrimaryControlPlane
	I0930 11:30:38.407821   34720 kubeadm.go:394] duration metric: took 73.840194ms to StartCluster
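	The kubeconfig.go lines above detect that the kubeconfig's server address (the HA VIP 192.168.39.254) does not match the endpoint being verified (192.168.39.249) and repair the file in place. A rough client-go sketch of that kind of server-address rewrite, assuming the cluster entry is named "ha-033260" (a sketch under those assumptions, not minikube's implementation):

	package main

	import (
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/19734-3842/kubeconfig"

		// Load the existing kubeconfig, patch the cluster's server URL, and
		// write it back, roughly what a "kubeconfig needs updating" repair does.
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			log.Fatal(err)
		}
		cluster, ok := cfg.Clusters["ha-033260"] // assumed cluster entry name
		if !ok {
			log.Fatalf("cluster %q not found in %s", "ha-033260", path)
		}
		cluster.Server = "https://192.168.39.249:8443" // the endpoint the verifier wanted, per the log
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			log.Fatal(err)
		}
	}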
	I0930 11:30:38.407838   34720 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.407924   34720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.408750   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.409039   34720 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:30:38.409099   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:30:38.409119   34720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:30:38.409305   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.411175   34720 out.go:177] * Enabled addons: 
	I0930 11:30:38.412776   34720 addons.go:510] duration metric: took 3.663147ms for enable addons: enabled=[]
	I0930 11:30:38.412820   34720 start.go:246] waiting for cluster config update ...
	I0930 11:30:38.412828   34720 start.go:255] writing updated cluster config ...
	I0930 11:30:38.414670   34720 out.go:201] 
	I0930 11:30:38.416408   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.416501   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.418474   34720 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:30:38.419875   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:38.419902   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:30:38.420019   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:30:38.420031   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:30:38.420138   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.420331   34720 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:38.420373   34720 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:30:38.420384   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:38.420389   34720 fix.go:54] fixHost starting: m02
	I0930 11:30:38.420682   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:38.420704   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:38.436048   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0930 11:30:38.436591   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:38.437106   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:38.437129   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:38.437434   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:38.437608   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:38.437762   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:30:38.439609   34720 fix.go:112] recreateIfNeeded on ha-033260-m02: state=Stopped err=<nil>
	I0930 11:30:38.439637   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	W0930 11:30:38.439785   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:38.443504   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m02" ...
	I0930 11:30:38.445135   34720 main.go:141] libmachine: (ha-033260-m02) Calling .Start
	I0930 11:30:38.445476   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:30:38.446588   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:30:38.447039   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:30:38.447376   34720 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:30:38.448426   34720 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:30:39.710879   34720 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:30:39.711874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.712365   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.712441   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.712367   35943 retry.go:31] will retry after 217.001727ms: waiting for machine to come up
	I0930 11:30:39.931176   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.931746   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.931795   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.931690   35943 retry.go:31] will retry after 360.379717ms: waiting for machine to come up
	I0930 11:30:40.293305   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.293927   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.293956   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.293884   35943 retry.go:31] will retry after 440.189289ms: waiting for machine to come up
	I0930 11:30:40.735666   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.736141   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.736171   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.736077   35943 retry.go:31] will retry after 458.690004ms: waiting for machine to come up
	I0930 11:30:41.196951   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.197392   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.197421   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.197336   35943 retry.go:31] will retry after 554.052986ms: waiting for machine to come up
	I0930 11:30:41.753199   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.753680   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.753707   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.753643   35943 retry.go:31] will retry after 931.699083ms: waiting for machine to come up
	I0930 11:30:42.686931   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:42.687320   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:42.687351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:42.687256   35943 retry.go:31] will retry after 1.166098452s: waiting for machine to come up
	I0930 11:30:43.855595   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:43.856179   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:43.856196   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:43.856132   35943 retry.go:31] will retry after 902.212274ms: waiting for machine to come up
	I0930 11:30:44.759588   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:44.760139   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:44.760169   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:44.760094   35943 retry.go:31] will retry after 1.732785907s: waiting for machine to come up
	I0930 11:30:46.495220   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:46.495722   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:46.495751   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:46.495670   35943 retry.go:31] will retry after 1.455893126s: waiting for machine to come up
	I0930 11:30:47.952835   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:47.953164   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:47.953186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:47.953117   35943 retry.go:31] will retry after 1.846394006s: waiting for machine to come up
	I0930 11:30:49.801836   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:49.802224   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:49.802255   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:49.802148   35943 retry.go:31] will retry after 3.334677314s: waiting for machine to come up
	I0930 11:30:53.140758   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:53.141162   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:53.141198   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:53.141142   35943 retry.go:31] will retry after 4.392553354s: waiting for machine to come up
	I0930 11:30:57.535667   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536094   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536115   34720 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:30:57.536128   34720 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:30:57.536667   34720 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:30:57.536690   34720 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:30:57.536702   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.536717   34720 main.go:141] libmachine: (ha-033260-m02) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"}
	I0930 11:30:57.536733   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:30:57.538801   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539092   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.539118   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539287   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:30:57.539307   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:30:57.539337   34720 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:57.539351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:30:57.539361   34720 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:30:57.665932   34720 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
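	The retry.go lines above poll for the restarted VM's IP with a delay that grows from roughly 200ms to several seconds between attempts before SSH finally answers. A simplified Go sketch of that wait-with-growing-backoff pattern (the multiplier, jitter, and cap are assumptions made for illustration):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls fn with a growing, slightly jittered delay, mirroring the
	// "will retry after ..." pattern seen in the log while waiting for the VM.
	func waitFor(fn func() error, maxWait time.Duration) error {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for {
			if err := fn(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 4))
			time.Sleep(delay + jitter)
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
	}

	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("machine has no IP yet")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("attempts:", attempts, "err:", err)
	}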
	I0930 11:30:57.666273   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:30:57.666869   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:57.669186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669581   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.669611   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669933   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:57.670195   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:57.670214   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:57.670410   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.672489   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.672840   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.672867   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.673009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.673202   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673389   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673514   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.673661   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.673838   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.673848   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:57.786110   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:57.786133   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786377   34720 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:30:57.786400   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786574   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.789039   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789439   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.789465   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789633   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.789794   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.789948   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.790053   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.790195   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.790374   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.790385   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:30:57.917415   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:30:57.917438   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.920154   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920496   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.920529   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920721   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.920892   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921046   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921171   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.921311   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.921493   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.921509   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:58.045391   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:58.045417   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:58.045437   34720 buildroot.go:174] setting up certificates
	I0930 11:30:58.045462   34720 provision.go:84] configureAuth start
	I0930 11:30:58.045479   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:58.045758   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.048321   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048721   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.048743   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048920   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.051229   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051564   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.051591   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051758   34720 provision.go:143] copyHostCerts
	I0930 11:30:58.051783   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051822   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:58.051830   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051885   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:58.051973   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.051994   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:58.051999   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.052023   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:58.052120   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052140   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:58.052144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:58.052236   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
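	The provision.go line above generates a server certificate for ha-033260-m02 signed by the minikube CA, with the SAN list shown ([127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]). A compressed Go sketch of issuing such a SAN-bearing server certificate; here a throwaway CA is created in-process and error handling is elided, whereas the real flow signs with the existing ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA, purely for illustration.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "ha-033260-m02", Organization: []string{"jenkins.ha-033260-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-033260-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}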
	I0930 11:30:58.137309   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:58.137363   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:58.137388   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.139915   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140158   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.140185   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140386   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.140552   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.140695   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.140798   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.228976   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:58.229076   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:58.254635   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:58.254717   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:30:58.279904   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:58.279982   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:58.305451   34720 provision.go:87] duration metric: took 259.975115ms to configureAuth
	I0930 11:30:58.305480   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:58.305758   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:58.305834   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.308335   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.308803   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.308825   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.309009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.309198   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309332   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309439   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.309633   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.309804   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.309818   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:58.549247   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:58.549271   34720 machine.go:96] duration metric: took 879.062425ms to provisionDockerMachine
	I0930 11:30:58.549282   34720 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:30:58.549291   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:58.549307   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.549711   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:58.549753   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.552476   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.552924   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.552952   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.553077   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.553265   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.553440   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.553591   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.641113   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:58.645683   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:58.645710   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:58.645780   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:58.645871   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:58.645881   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:58.645976   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:58.656118   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:58.683428   34720 start.go:296] duration metric: took 134.134961ms for postStartSetup
	I0930 11:30:58.683471   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.683772   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:58.683796   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.686150   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686552   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.686580   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686712   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.686921   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.687033   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.687137   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.772957   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:58.773054   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:58.831207   34720 fix.go:56] duration metric: took 20.410809297s for fixHost
	I0930 11:30:58.831256   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.834153   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834531   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.834561   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834754   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.834963   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835129   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835280   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.835497   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.835715   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.835747   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:58.950852   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695858.923209005
	
	I0930 11:30:58.950874   34720 fix.go:216] guest clock: 1727695858.923209005
	I0930 11:30:58.950882   34720 fix.go:229] Guest: 2024-09-30 11:30:58.923209005 +0000 UTC Remote: 2024-09-30 11:30:58.831234705 +0000 UTC m=+335.558365405 (delta=91.9743ms)
	I0930 11:30:58.950897   34720 fix.go:200] guest clock delta is within tolerance: 91.9743ms
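	The fix.go lines above parse the guest's `date +%s.%N` output and compare it to the host-side timestamp to decide whether the clock skew is tolerable (91.9743ms here). A small Go sketch of that comparison using the exact values from this log (the parsing helper name is made up):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N" output such as
	// "1727695858.923209005" into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1727695858.923209005") // guest clock from the log
		if err != nil {
			panic(err)
		}
		remote := time.Unix(1727695858, 831234705)            // host-side "Remote" timestamp from the log
		fmt.Println("guest clock delta:", guest.Sub(remote))  // ~91.9743ms, matching the log
	}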
	I0930 11:30:58.950902   34720 start.go:83] releasing machines lock for "ha-033260-m02", held for 20.530522823s
	I0930 11:30:58.950922   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.951203   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.954037   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.954470   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.954495   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.956428   34720 out.go:177] * Found network options:
	I0930 11:30:58.958147   34720 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:30:58.959662   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.959685   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960216   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960383   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960470   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:58.960516   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:30:58.960557   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.960638   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:58.960661   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.963506   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963693   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.963901   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964044   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964186   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964190   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.964217   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964364   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964379   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964505   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.964524   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964643   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964756   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:59.185932   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:59.192578   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:59.192645   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:59.212639   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:59.212663   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:59.212730   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:59.233596   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:59.248239   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:59.248310   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:59.262501   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:59.277031   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:59.408627   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:59.575087   34720 docker.go:233] disabling docker service ...
	I0930 11:30:59.575157   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:59.590510   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:59.605151   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:59.739478   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:59.876906   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:59.891632   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:59.911543   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:59.911601   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.923050   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:59.923114   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.934427   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.945682   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.957111   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:59.968813   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.980975   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.999767   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:31:00.011463   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:31:00.021740   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:31:00.021804   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:31:00.036575   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:31:00.046724   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:00.166031   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:31:00.263048   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:31:00.263104   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:31:00.268250   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:31:00.268319   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:31:00.272426   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:31:00.321494   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:31:00.321561   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.350506   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.381505   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:31:00.383057   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:31:00.384433   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:31:00.387430   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.387871   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:31:00.387903   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.388092   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:31:00.392819   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:00.406199   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:31:00.406474   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:00.406842   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.406891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.421565   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0930 11:31:00.422022   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.422477   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.422501   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.422814   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.423031   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:31:00.424747   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:31:00.425025   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.425059   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.439760   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:31:00.440237   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.440699   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.440716   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.441029   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.441215   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:31:00.441357   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:31:00.441367   34720 certs.go:194] generating shared ca certs ...
	I0930 11:31:00.441380   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.441501   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:31:00.441541   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:31:00.441555   34720 certs.go:256] generating profile certs ...
	I0930 11:31:00.441653   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:31:00.441679   34720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173
	I0930 11:31:00.441696   34720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:31:00.711479   34720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 ...
	I0930 11:31:00.711512   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173: {Name:mk8969b2efcc5de06d527c6abe25d7f8f8bfba86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711706   34720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 ...
	I0930 11:31:00.711723   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173: {Name:mkcb971c29eb187169c6672af3a12c14dd523134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711815   34720 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:31:00.711977   34720 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:31:00.712110   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:31:00.712126   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:31:00.712141   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:31:00.712175   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:31:00.712192   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:31:00.712204   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:31:00.712217   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:31:00.712228   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:31:00.712238   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:31:00.712287   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:31:00.712314   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:31:00.712324   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:31:00.712348   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:31:00.712369   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:31:00.712408   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:31:00.712446   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:31:00.712473   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:31:00.712487   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:00.712499   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:31:00.712528   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:31:00.715756   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716154   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:31:00.716181   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716374   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:31:00.716558   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:31:00.716720   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:31:00.716893   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:31:00.794084   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:31:00.799675   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:31:00.812361   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:31:00.817141   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:31:00.828855   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:31:00.833566   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:31:00.844934   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:31:00.849462   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:31:00.860080   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:31:00.864183   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:31:00.875695   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:31:00.880202   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:31:00.891130   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:31:00.918693   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:31:00.944303   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:31:00.969526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:31:00.996710   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:31:01.023015   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:31:01.050381   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:31:01.076757   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:31:01.103526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:31:01.129114   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:31:01.155177   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:31:01.180954   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:31:01.199391   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:31:01.218184   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:31:01.238266   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:31:01.258183   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:31:01.276632   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:31:01.294303   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:31:01.312244   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:31:01.318735   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:31:01.330839   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.335928   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.336000   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.342463   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:31:01.353941   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:31:01.365658   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370653   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370714   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.376795   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:31:01.388155   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:31:01.399831   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404901   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404967   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.411138   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:31:01.422294   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:31:01.426988   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:31:01.433816   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:31:01.440682   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:31:01.447200   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:31:01.454055   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:31:01.460508   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:31:01.466735   34720 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:31:01.466882   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:31:01.466926   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:31:01.466986   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:31:01.485425   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:31:01.485500   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:31:01.485555   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:31:01.495844   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:31:01.495903   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:31:01.505526   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:31:01.523077   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:31:01.540915   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:31:01.558204   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:31:01.562410   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:01.575484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.701502   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.719655   34720 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:31:01.719937   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:01.723162   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:31:01.724484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.910906   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.933340   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:31:01.933718   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:31:01.933803   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:31:01.934081   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:01.934248   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:01.934259   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:01.934274   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:01.934285   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:06.735523   34720 round_trippers.go:574] Response Status:  in 4801 milliseconds
	I0930 11:31:07.735873   34720 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735937   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735944   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:07.735954   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:07.735960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:17.737130   34720 round_trippers.go:574] Response Status:  in 10001 milliseconds
	I0930 11:31:17.737228   34720 node_ready.go:53] error getting node "ha-033260-m02": Get "https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.39.1:51024->192.168.39.249:8443: read: connection reset by peer
	I0930 11:31:17.737312   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:17.737324   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:17.737335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:17.737343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.500223   34720 round_trippers.go:574] Response Status: 200 OK in 3762 milliseconds
	I0930 11:31:21.501292   34720 node_ready.go:53] node "ha-033260-m02" has status "Ready":"Unknown"
	I0930 11:31:21.501373   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.501386   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.501397   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.501404   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.519310   34720 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:31:21.934926   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.934946   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.934956   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.934960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.940164   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:22.434503   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.434527   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.434544   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.434553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.438661   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:22.934869   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.934914   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.934923   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.934927   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.937891   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.435280   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.435301   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.435309   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.435314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.441790   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.444141   34720 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:31:23.444180   34720 node_ready.go:38] duration metric: took 21.510052339s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:23.444195   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:31:23.444252   34720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:31:23.444273   34720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:31:23.444364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:23.444380   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.444392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.444401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.454505   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:23.465935   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.466047   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:31:23.466061   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.466072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.466081   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.474857   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:23.475614   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.475635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.475647   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.475654   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.478510   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.479069   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.479097   34720 pod_ready.go:82] duration metric: took 13.131126ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479109   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479186   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:31:23.479199   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.479208   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.479213   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.485985   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.486909   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.486931   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.486941   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.486947   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490284   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.490832   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.490853   34720 pod_ready.go:82] duration metric: took 11.73655ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490864   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:31:23.490962   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.490972   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.498681   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:23.499421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.499443   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.499460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.499466   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.503369   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.503948   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.503974   34720 pod_ready.go:82] duration metric: took 13.102363ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.503986   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.504068   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:23.504080   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.504090   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.504097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.510528   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.511092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.511107   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.511115   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.511122   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.515703   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:24.004536   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.004560   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.004580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.004588   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.008341   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.009009   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.009023   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.009030   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.009038   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.011924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:24.504942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.504982   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.504991   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.504996   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.508600   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.509408   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.509428   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.509437   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.509441   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.512140   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.005082   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.005104   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.005112   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.005115   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.008608   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:25.009145   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.009159   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.009166   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.009172   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.012052   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.505333   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.505422   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.505445   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.505470   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.544680   34720 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0930 11:31:25.545744   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.545758   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.545766   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.545771   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.559955   34720 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0930 11:31:25.560548   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:26.004848   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.004869   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.004877   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.004881   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.008562   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.009380   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.009397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.009407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.009413   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.012491   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.504290   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.504315   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.504327   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.504335   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.508059   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.508795   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.508813   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.508823   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.508828   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.512273   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.004525   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.004546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.004555   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.004560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009158   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:27.009942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.009959   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.009967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.013093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.505035   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.505082   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.505093   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.505100   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.508864   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.509652   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.509670   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.509681   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.509687   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.512440   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:28.005011   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.005040   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.005051   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.005058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.013343   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:28.014728   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.014745   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.014754   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.014758   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.036177   34720 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0930 11:31:28.037424   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:28.504206   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.504241   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.504249   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.504254   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.511361   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:28.512356   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.512373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.512383   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.512389   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.525172   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:31:29.005163   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.005184   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.005195   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.005200   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.010684   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.011486   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.011516   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.011528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.011535   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.017470   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.505132   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.505152   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.505162   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.505168   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.518955   34720 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0930 11:31:29.519584   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.519602   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.519612   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.519619   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.530475   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:30.004860   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.004881   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.004889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.004893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.008564   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.009192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.009207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.009215   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.009220   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.013399   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.504171   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.504195   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.504205   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.504210   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.507972   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.509257   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.509275   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.509283   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.509286   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.513975   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.514510   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:31.004737   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.004765   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.004775   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.004780   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010196   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:31.010880   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.010900   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.010912   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010919   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.014567   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:31.504379   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.504397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.504405   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.504409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.511899   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:31.513088   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.513111   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.513122   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.513128   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.516398   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.005079   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.005119   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.005131   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.005138   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.009300   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:32.010097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.010118   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.010130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.010137   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.013237   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.505168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.505192   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.505203   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.505209   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.509155   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.509935   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.509953   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.509960   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.509964   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.513296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:33.004767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.004802   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.004812   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.004818   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.009316   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:33.009983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.009997   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.010005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.010018   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.012955   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:33.013498   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:33.504397   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.504432   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.504443   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.504450   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.620464   34720 round_trippers.go:574] Response Status: 200 OK in 115 milliseconds
	I0930 11:31:33.621445   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.621467   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.621479   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.621486   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.624318   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.004311   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:34.004332   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.004341   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.004346   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.008601   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.009530   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.009546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.009553   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.009556   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.013047   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.013767   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.013788   34720 pod_ready.go:82] duration metric: took 10.509794387s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013800   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013877   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:31:34.013888   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.013899   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.013908   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.021427   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:34.022374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.022393   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.022405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.022412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.026491   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.027124   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.027154   34720 pod_ready.go:82] duration metric: took 13.341195ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027184   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027276   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:31:34.027289   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.027300   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.027306   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.031483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.032050   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.032064   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.032072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.032075   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.035296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.035760   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.035779   34720 pod_ready.go:82] duration metric: took 8.586877ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035787   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035853   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:31:34.035863   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.035870   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.035874   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.040970   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.041904   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.041918   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.041926   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.041929   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.046986   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.047525   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.047542   34720 pod_ready.go:82] duration metric: took 11.747596ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047550   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047603   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:31:34.047611   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.047617   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.047621   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.053430   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.054003   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.054018   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.054025   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.054029   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.056888   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.057338   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.057358   34720 pod_ready.go:82] duration metric: took 9.802193ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.057367   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.204770   34720 request.go:632] Waited for 147.330113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204839   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204844   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.204851   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.204860   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.209352   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
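
	The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter, not from the API server's Priority and Fairness feature: a rest.Config defaults to a token-bucket limiter of roughly 5 QPS with a burst of 10, so back-to-back GETs queue for a fraction of a second. A minimal stand-in for that behaviour (illustrative only; not minikube's or client-go's actual implementation, and the 5/10 defaults are an assumption stated for context):

	package main

	import (
		"fmt"
		"time"
	)

	// tokenBucket is a toy client-side rate limiter: tokens are refilled at qps
	// per second up to burst, and every request must take one token before it is sent.
	type tokenBucket struct {
		tokens chan struct{}
	}

	func newTokenBucket(qps, burst int) *tokenBucket {
		tb := &tokenBucket{tokens: make(chan struct{}, burst)}
		for i := 0; i < burst; i++ {
			tb.tokens <- struct{}{} // start with a full bucket
		}
		go func() {
			ticker := time.NewTicker(time.Second / time.Duration(qps))
			defer ticker.Stop()
			for range ticker.C {
				select {
				case tb.tokens <- struct{}{}: // refill one token
				default: // bucket already full
				}
			}
		}()
		return tb
	}

	// Accept blocks until a token is available and reports how long the caller
	// waited, mirroring the "Waited for ... due to client-side throttling" log line.
	func (tb *tokenBucket) Accept() time.Duration {
		start := time.Now()
		<-tb.tokens
		return time.Since(start)
	}

	func main() {
		limiter := newTokenBucket(5, 10)
		for i := 0; i < 15; i++ {
			if waited := limiter.Accept(); waited > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, waited)
			}
		}
	}
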
	I0930 11:31:34.404334   34720 request.go:632] Waited for 194.306843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404424   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.404441   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.404444   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.408185   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.605268   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.605293   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.605306   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.605311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.608441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.804521   34720 request.go:632] Waited for 195.318558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804587   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804592   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.804600   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.804607   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.808658   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.058569   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.058598   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.058609   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.058614   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.062153   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.204479   34720 request.go:632] Waited for 141.249746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204567   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204575   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.204586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.204594   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.209332   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.558083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.558103   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.558111   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.558116   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.562046   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.605131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.605167   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.605179   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.605184   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.616080   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:36.058179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:36.058207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.058218   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.058236   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.062566   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:36.063353   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:36.063373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.063384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.063390   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.066635   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.067352   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.067373   34720 pod_ready.go:82] duration metric: took 2.009999965s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.067382   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.204802   34720 request.go:632] Waited for 137.362306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204868   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204890   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.204901   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.204907   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.208231   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.404396   34720 request.go:632] Waited for 195.331717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404465   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.404473   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.404477   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.408489   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.409278   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.409299   34720 pod_ready.go:82] duration metric: took 341.910503ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.409308   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.604639   34720 request.go:632] Waited for 195.258772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604699   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604706   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.604716   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.604721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.608453   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.804560   34720 request.go:632] Waited for 195.30805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804622   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.804645   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.804651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.808127   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.808836   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.808857   34720 pod_ready.go:82] duration metric: took 399.543561ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.808867   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.004923   34720 request.go:632] Waited for 195.985958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004973   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004978   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.004985   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.004989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.008223   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.205282   34720 request.go:632] Waited for 196.371879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205357   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205362   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.205369   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.205374   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.208700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.209207   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.209239   34720 pod_ready.go:82] duration metric: took 400.365138ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.209250   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.405282   34720 request.go:632] Waited for 195.959121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405389   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405398   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.405409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.405429   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.409314   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.605347   34720 request.go:632] Waited for 195.282379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.605450   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.605459   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.608764   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.609479   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.609498   34720 pod_ready.go:82] duration metric: took 400.240233ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.609507   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.804579   34720 request.go:632] Waited for 195.010584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804657   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804664   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.804671   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.804675   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.808363   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.005248   34720 request.go:632] Waited for 196.304263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005314   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005321   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.005330   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.005333   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.009635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:38.010535   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.010557   34720 pod_ready.go:82] duration metric: took 401.042919ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.010566   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.204595   34720 request.go:632] Waited for 193.96721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204665   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204677   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.204689   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.204696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.208393   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.404559   34720 request.go:632] Waited for 195.429784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404620   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.404641   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.404646   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.408057   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.408674   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.408694   34720 pod_ready.go:82] duration metric: took 398.12275ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.408703   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.605374   34720 request.go:632] Waited for 196.589593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605437   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.605444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.605449   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.609411   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.804516   34720 request.go:632] Waited for 194.287587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804579   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.804586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.804589   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.808043   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.808604   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.808623   34720 pod_ready.go:82] duration metric: took 399.91394ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.808637   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.004815   34720 request.go:632] Waited for 196.10639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004881   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004887   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.004895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.004900   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.008293   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.204330   34720 request.go:632] Waited for 195.292523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204402   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204410   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.204419   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.204428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.208212   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.208803   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.208826   34720 pod_ready.go:82] duration metric: took 400.181261ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.208843   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.404860   34720 request.go:632] Waited for 195.933233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404913   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404919   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.404926   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.404931   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.408874   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.604903   34720 request.go:632] Waited for 195.413864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604970   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604975   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.604983   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.604987   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.608209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.608764   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.608784   34720 pod_ready.go:82] duration metric: took 399.933732ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.608794   34720 pod_ready.go:39] duration metric: took 16.164585673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
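
	Each "waiting up to 6m0s for pod ... to be Ready" block above is a polling loop: GET the pod, check its Ready condition, GET the node it runs on, sleep roughly half a second, and repeat until the condition flips or the deadline expires. A minimal client-go sketch of that pattern (illustrative; the helper name, poll interval, and kubeconfig path below are assumptions, not minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls a pod in kube-system until its Ready condition is True
	// or the timeout elapses, mirroring the pod_ready.go wait loop in the log.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil // pod has status "Ready":"True"
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %q was not Ready within %v", name, timeout)
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll spacing visible in the log
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "etcd-ha-033260-m02", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
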
	I0930 11:31:39.608807   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:31:39.608855   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:31:39.626199   34720 api_server.go:72] duration metric: took 37.906495975s to wait for apiserver process to appear ...
	I0930 11:31:39.626228   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:31:39.626249   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:31:39.630779   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:31:39.630856   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:31:39.630864   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.630872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.630879   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.631851   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:31:39.631971   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:31:39.631987   34720 api_server.go:131] duration metric: took 5.751654ms to wait for apiserver health ...
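
	The healthz probe above is simply an HTTPS GET against https://192.168.39.249:8443/healthz whose body should read "ok", followed by a GET of /version to record the control-plane version. A minimal sketch of such a probe, assuming the cluster CA certificate is available on disk (the CA path below is an assumption, and a real cluster may additionally require client credentials):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	// checkHealthz GETs the apiserver /healthz endpoint and reports whether it
	// returned HTTP 200 with a body of "ok", as seen in the log above.
	func checkHealthz(caPath, url string) error {
		caPEM, err := os.ReadFile(caPath)
		if err != nil {
			return err
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			return fmt.Errorf("could not parse CA certificate at %s", caPath)
		}
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		// CA path is an assumption for the sketch.
		if err := checkHealthz("/home/jenkins/.minikube/ca.crt", "https://192.168.39.249:8443/healthz"); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthz: ok")
	}
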
	I0930 11:31:39.631994   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:31:39.805247   34720 request.go:632] Waited for 173.189912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805322   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805328   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.805335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.805339   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.811658   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:39.818704   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:31:39.818737   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818745   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818751   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:39.818754   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:39.818758   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:39.818761   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:39.818766   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:39.818769   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:39.818772   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:39.818777   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:39.818781   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:39.818787   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:39.818792   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:39.818797   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:39.818803   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:39.818809   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:39.818814   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:39.818820   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:39.818828   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:39.818834   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:39.818840   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:39.818843   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:39.818846   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:39.818852   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:39.818855   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:39.818858   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:39.818864   34720 system_pods.go:74] duration metric: took 186.864889ms to wait for pod list to return data ...
	I0930 11:31:39.818873   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:31:40.005326   34720 request.go:632] Waited for 186.370068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005389   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.005396   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.005401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.009301   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.009537   34720 default_sa.go:45] found service account: "default"
	I0930 11:31:40.009555   34720 default_sa.go:55] duration metric: took 190.676192ms for default service account to be created ...
	I0930 11:31:40.009564   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:31:40.205063   34720 request.go:632] Waited for 195.430952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205139   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.205147   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.205150   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.210696   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:40.219002   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:31:40.219052   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219065   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219074   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:40.219081   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:40.219086   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:40.219092   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:40.219097   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:40.219103   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:40.219108   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:40.219115   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:40.219123   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:40.219130   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:40.219137   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:40.219145   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:40.219149   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:40.219155   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:40.219158   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:40.219162   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:40.219168   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:40.219171   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:40.219177   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:40.219181   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:40.219186   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:40.219190   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:40.219193   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:40.219196   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:40.219204   34720 system_pods.go:126] duration metric: took 209.632746ms to wait for k8s-apps to be running ...
	I0930 11:31:40.219213   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:31:40.219257   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:31:40.234570   34720 system_svc.go:56] duration metric: took 15.34883ms WaitForService to wait for kubelet
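
	The kubelet check above boils down to running `sudo systemctl is-active --quiet service kubelet` on the node and treating exit code 0 as "running". A stripped-down local sketch of the same check (minikube actually routes the command through its ssh_runner rather than local exec):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive reports whether the kubelet systemd unit is active.
	// `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise.
	func kubeletActive() bool {
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		return cmd.Run() == nil
	}

	func main() {
		if kubeletActive() {
			fmt.Println("kubelet service is running")
		} else {
			fmt.Println("kubelet service is not active")
		}
	}
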
	I0930 11:31:40.234600   34720 kubeadm.go:582] duration metric: took 38.514901899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:31:40.234618   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:31:40.405060   34720 request.go:632] Waited for 170.372351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405138   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.405146   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.405152   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.409008   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.411040   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411072   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411093   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411098   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411104   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411112   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411118   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411123   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411130   34720 node_conditions.go:105] duration metric: took 176.506295ms to run NodePressure ...
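
	The NodePressure verification above reads each node's reported capacity (the "17734596Ki" ephemeral storage and "2" CPU figures) and checks its pressure conditions. A client-go sketch of the same read (illustrative; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, node := range nodes.Items {
			// Capacity values like "17734596Ki" and "2" in the log come from these fields.
			storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := node.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral storage %s, cpu %s\n", node.Name, storage.String(), cpu.String())

			// NodePressure verification: none of the pressure conditions should be True.
			for _, cond := range node.Status.Conditions {
				switch cond.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if cond.Status == corev1.ConditionTrue {
						fmt.Printf("%s: node pressure detected: %s\n", node.Name, cond.Type)
					}
				}
			}
		}
	}
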
	I0930 11:31:40.411143   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:31:40.411178   34720 start.go:255] writing updated cluster config ...
	I0930 11:31:40.413535   34720 out.go:201] 
	I0930 11:31:40.415246   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:40.415334   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.417113   34720 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:31:40.418650   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:31:40.418678   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:31:40.418775   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:31:40.418789   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:31:40.418878   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.419069   34720 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:31:40.419116   34720 start.go:364] duration metric: took 28.328µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:31:40.419128   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:31:40.419133   34720 fix.go:54] fixHost starting: m03
	I0930 11:31:40.419393   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:40.419421   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:40.434730   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0930 11:31:40.435197   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:40.435685   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:40.435709   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:40.436046   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:40.436205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:40.436359   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:31:40.437971   34720 fix.go:112] recreateIfNeeded on ha-033260-m03: state=Stopped err=<nil>
	I0930 11:31:40.437995   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	W0930 11:31:40.438139   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:31:40.440134   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m03" ...
	I0930 11:31:40.441557   34720 main.go:141] libmachine: (ha-033260-m03) Calling .Start
	I0930 11:31:40.441787   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:31:40.442656   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:31:40.442963   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:31:40.443304   34720 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:31:40.443900   34720 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:31:41.716523   34720 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:31:41.717310   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.717755   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.717843   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.717745   36275 retry.go:31] will retry after 213.974657ms: waiting for machine to come up
	I0930 11:31:41.933006   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.933445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.933470   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.933400   36275 retry.go:31] will retry after 366.443935ms: waiting for machine to come up
	I0930 11:31:42.300826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.301240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.301268   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.301200   36275 retry.go:31] will retry after 298.736034ms: waiting for machine to come up
	I0930 11:31:42.601863   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.602344   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.602373   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.602300   36275 retry.go:31] will retry after 422.049065ms: waiting for machine to come up
	I0930 11:31:43.025989   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.026495   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.026518   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.026460   36275 retry.go:31] will retry after 501.182735ms: waiting for machine to come up
	I0930 11:31:43.529199   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.529601   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.529643   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.529556   36275 retry.go:31] will retry after 658.388185ms: waiting for machine to come up
	I0930 11:31:44.189982   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:44.190445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:44.190485   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:44.190396   36275 retry.go:31] will retry after 869.323325ms: waiting for machine to come up
	I0930 11:31:45.061299   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:45.061826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:45.061855   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:45.061762   36275 retry.go:31] will retry after 1.477543518s: waiting for machine to come up
	I0930 11:31:46.540654   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:46.541062   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:46.541088   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:46.541024   36275 retry.go:31] will retry after 1.217619831s: waiting for machine to come up
	I0930 11:31:47.760283   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:47.760670   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:47.760692   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:47.760626   36275 retry.go:31] will retry after 1.524149013s: waiting for machine to come up
	I0930 11:31:49.286693   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:49.287173   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:49.287205   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:49.287119   36275 retry.go:31] will retry after 2.547999807s: waiting for machine to come up
	I0930 11:31:51.836378   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:51.836878   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:51.836903   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:51.836847   36275 retry.go:31] will retry after 3.478582774s: waiting for machine to come up
	I0930 11:31:55.318753   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:55.319267   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:55.319288   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:55.319225   36275 retry.go:31] will retry after 4.232487143s: waiting for machine to come up
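
	The "will retry after ..." lines above come from a retry helper that keeps polling the hypervisor for the VM's DHCP lease with an increasing, jittered delay until an IP address appears. A minimal stand-in for that pattern (illustrative; the backoff constants and function names are assumptions, not libmachine's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or maxAttempts is reached,
	// sleeping an exponentially growing, jittered duration between attempts,
	// similar to the "will retry after ..." messages in the log.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			backoff := base * time.Duration(1<<attempt) // exponential growth
			jitter := time.Duration(rand.Int63n(int64(backoff)))
			delay := backoff/2 + jitter // +/-50% jitter around the backoff
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(10, 200*time.Millisecond, func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for machine to come up")
			}
			return nil // pretend the DHCP lease finally appeared
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("machine is up after", attempts, "attempts")
	}
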
	I0930 11:31:59.554587   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555031   34720 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:31:59.555054   34720 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:31:59.555067   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555464   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.555482   34720 main.go:141] libmachine: (ha-033260-m03) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"}
	I0930 11:31:59.555498   34720 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:31:59.555507   34720 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:31:59.555514   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:31:59.558171   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558619   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.558660   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558780   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:31:59.558806   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:31:59.558840   34720 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:31:59.558849   34720 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:31:59.558869   34720 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:31:59.689497   34720 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 11:31:59.689854   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:31:59.690426   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:31:59.692709   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693063   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.693096   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693354   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:59.693555   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:31:59.693570   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:59.693768   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.695742   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696024   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.696050   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.696286   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696441   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696600   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.696763   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.696989   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.697005   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:31:59.810353   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:31:59.810380   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810618   34720 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:31:59.810647   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810829   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.813335   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813637   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.813661   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813848   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.814001   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814334   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.814507   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.814661   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.814672   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:31:59.949653   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:31:59.949686   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.952597   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.952969   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.952992   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.953242   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.953469   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953637   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953759   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.953884   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.954062   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.954084   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:00.079890   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:00.079918   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:00.079939   34720 buildroot.go:174] setting up certificates
	I0930 11:32:00.079950   34720 provision.go:84] configureAuth start
	I0930 11:32:00.079961   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:32:00.080205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:00.082895   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083281   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.083307   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083437   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.085443   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085756   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.085776   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085897   34720 provision.go:143] copyHostCerts
	I0930 11:32:00.085925   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.085978   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:00.085987   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.086050   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:00.086121   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086137   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:00.086142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:00.086219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086243   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:00.086252   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086288   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:00.086360   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
	I0930 11:32:00.252602   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:00.252654   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:00.252676   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.255361   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255706   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.255731   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255860   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.255996   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.256131   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.256249   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.345059   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:00.345126   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:00.370752   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:00.370827   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:32:00.397640   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:00.397703   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:00.424094   34720 provision.go:87] duration metric: took 344.128805ms to configureAuth
	I0930 11:32:00.424128   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:00.424360   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:00.424480   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.427139   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427536   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.427563   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427770   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.427949   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428043   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428125   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.428217   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.428408   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.428424   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:00.687881   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:00.687919   34720 machine.go:96] duration metric: took 994.35116ms to provisionDockerMachine
	I0930 11:32:00.687935   34720 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:32:00.687950   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:00.687976   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.688322   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:00.688349   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.691216   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691735   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.691763   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691959   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.692185   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.692344   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.692469   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.781946   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:00.786396   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:00.786417   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:00.786494   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:00.786562   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:00.786571   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:00.786646   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:00.796771   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:00.822239   34720 start.go:296] duration metric: took 134.285857ms for postStartSetup
	I0930 11:32:00.822297   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.822594   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:00.822622   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.825375   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825743   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.825764   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825954   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.826142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.826331   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.826492   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.912681   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:00.912751   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:00.970261   34720 fix.go:56] duration metric: took 20.551120789s for fixHost
	I0930 11:32:00.970311   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.973284   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973694   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.973722   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973873   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.974035   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974161   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974267   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.974426   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.974622   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.974633   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:01.099052   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695921.066520843
	
	I0930 11:32:01.099078   34720 fix.go:216] guest clock: 1727695921.066520843
	I0930 11:32:01.099089   34720 fix.go:229] Guest: 2024-09-30 11:32:01.066520843 +0000 UTC Remote: 2024-09-30 11:32:00.970290394 +0000 UTC m=+397.697421093 (delta=96.230449ms)
	I0930 11:32:01.099110   34720 fix.go:200] guest clock delta is within tolerance: 96.230449ms
	I0930 11:32:01.099117   34720 start.go:83] releasing machines lock for "ha-033260-m03", held for 20.679993634s
	I0930 11:32:01.099137   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.099384   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:01.102141   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.102593   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.102620   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.104827   34720 out.go:177] * Found network options:
	I0930 11:32:01.106181   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:32:01.107308   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.107329   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.107343   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.107885   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108079   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108167   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:01.108222   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:32:01.108292   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.108316   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.108408   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:01.108430   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:01.111240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111542   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111663   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111698   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111858   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.111861   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111893   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.112028   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.112064   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112182   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112189   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112347   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.112360   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112529   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.339136   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:01.345573   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:01.345659   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:01.362608   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:01.362630   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:01.362686   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:01.381024   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:01.396259   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:01.396333   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:01.412406   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:01.429258   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:01.562463   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:01.730591   34720 docker.go:233] disabling docker service ...
	I0930 11:32:01.730664   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:01.755797   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:01.769489   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:01.890988   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:02.019465   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:02.036168   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:02.059913   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:02.059981   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.072160   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:02.072247   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.084599   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.096290   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.108573   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:02.120977   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.132246   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.150591   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.162524   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:02.173575   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:02.173660   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:02.188268   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:02.199979   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:02.326960   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:02.439885   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:02.439960   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:02.446734   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:02.446849   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:02.451344   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:02.492029   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:02.492116   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.521734   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.556068   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:02.557555   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:02.558901   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:02.560920   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:02.563759   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564191   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:02.564218   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564482   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:02.569571   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:02.585245   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:02.585463   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:02.585746   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.585790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.617422   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0930 11:32:02.617831   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.618295   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.618314   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.618694   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.618907   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:02.621016   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:02.621337   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.621378   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.636969   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I0930 11:32:02.637538   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.638051   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.638068   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.638431   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.638769   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:02.639005   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:32:02.639018   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:02.639031   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:02.639158   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:02.639204   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:02.639213   34720 certs.go:256] generating profile certs ...
	I0930 11:32:02.639277   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:32:02.639334   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:32:02.639369   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:32:02.639382   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:02.639398   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:02.639410   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:02.639423   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:02.639436   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:32:02.639451   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:32:02.639464   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:32:02.639477   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:32:02.639526   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:02.639556   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:02.639565   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:02.639587   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:02.639609   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:02.639654   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:02.639691   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:02.639715   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:02.639728   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:02.639740   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:02.639764   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:32:02.643357   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.643807   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:32:02.643839   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.644023   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:32:02.644227   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:32:02.644414   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:32:02.644553   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:32:02.726043   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:32:02.732664   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:32:02.744611   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:32:02.750045   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:32:02.763417   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:32:02.768220   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:32:02.780605   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:32:02.786158   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:32:02.802503   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:32:02.809377   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:32:02.821900   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:32:02.827740   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:32:02.842110   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:02.872987   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:02.903102   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:02.932917   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:02.966742   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:32:02.995977   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:32:03.025802   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:32:03.057227   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:32:03.085425   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:03.115042   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:03.142328   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:03.168248   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:32:03.189265   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:32:03.208719   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:32:03.227953   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:32:03.248805   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:32:03.268786   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:32:03.288511   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:32:03.309413   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:03.315862   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:03.328610   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333839   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333909   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.340595   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:03.353343   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:03.364689   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369580   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369669   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.376067   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:03.388290   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:03.400003   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405168   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405235   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.411812   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:03.424569   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:03.429588   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:32:03.436748   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:32:03.443675   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:32:03.450618   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:32:03.457889   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:32:03.464815   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:32:03.471778   34720 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:32:03.471887   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:03.471924   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:32:03.471974   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:32:03.490629   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:32:03.490701   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:32:03.490761   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:03.502695   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:03.502771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:32:03.514300   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:03.532840   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:03.552583   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:32:03.570717   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:03.574725   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:03.588635   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.736031   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.755347   34720 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:32:03.755606   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:03.757343   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:03.758664   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.930799   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.947764   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:03.948004   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:03.948058   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:03.948281   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.948378   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:03.948390   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.948398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.948408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.951644   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.952631   34720 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:32:03.952655   34720 node_ready.go:38] duration metric: took 4.354654ms for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.952666   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
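The long runs of GET requests that follow implement this wait: for each system-critical pod, the client fetches the pod, checks its Ready condition, and retries roughly every half second until the condition reports True or the 6m0s budget runs out. A minimal client-go sketch of the same polling loop (the kubeconfig path, poll interval, and pod name below are placeholders chosen for illustration, not minikube's actual pod_ready.go):

// pod_ready_poll_sketch.go
//
// Illustrative readiness poll: GET the pod, inspect status.conditions for
// type=Ready, retry on an interval until it is True or the context times out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the test uses its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same overall budget as the log above: up to 6 minutes per pod.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-5frmm", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to become Ready")
			return
		case <-time.After(500 * time.Millisecond): // assumed poll interval
		}
	}
}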
	I0930 11:32:03.952741   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:03.952751   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.952758   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.952763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.959043   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:03.966223   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:03.966318   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:03.966326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.966334   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.966341   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.969582   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.970409   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:03.970425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.970433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.970436   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.973995   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.466604   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.466626   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.466634   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.466638   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.470966   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.470982   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.470989   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470994   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.473518   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:04.966613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.966634   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.966642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.966647   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.970295   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.971225   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.971247   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.971256   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.971267   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.974506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.466575   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.466597   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.466605   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.466609   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.471476   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.472347   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.472369   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.472379   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.472385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.476605   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.966462   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.966484   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.966495   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.966499   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.970347   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.971438   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.971455   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.971465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.971469   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.975635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.976454   34720 pod_ready.go:103] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:06.466781   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.466807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.466818   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.466825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.470300   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.471083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.471100   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.471108   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.471111   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.474455   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.966864   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.966887   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.966895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.966899   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.970946   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:06.971993   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.972007   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.972014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.972021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.975563   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.466626   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.466651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.466664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.466671   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.471030   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:07.471751   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.471767   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.471775   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.471780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.475078   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.966446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.966464   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.966472   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.966476   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.970130   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.970892   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.970907   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.970916   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.970921   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.974558   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.467355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:08.467382   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.467392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.467398   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.491602   34720 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0930 11:32:08.492458   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.492478   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.492488   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.492494   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.504709   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:32:08.505926   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.505961   34720 pod_ready.go:82] duration metric: took 4.539705143s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.505976   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.506053   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:08.506070   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.506079   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.506091   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.513015   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:08.514472   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.514492   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.514500   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.514504   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.522097   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:08.522597   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.522620   34720 pod_ready.go:82] duration metric: took 16.634648ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522632   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522710   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:08.522720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.522730   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.522736   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.528114   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:08.529205   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.529222   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.529239   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.529245   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.532511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.533059   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.533085   34720 pod_ready.go:82] duration metric: took 10.444686ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533097   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:08.533175   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.533185   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.533194   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.536360   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.537030   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:08.537046   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.537054   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.537058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.540241   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.540684   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.540702   34720 pod_ready.go:82] duration metric: took 7.598243ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540712   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540774   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:08.540782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.540789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.540794   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.544599   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.545135   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:08.545150   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.545158   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.545161   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.548627   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.041691   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.041715   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.041724   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.041728   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.045686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.046390   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.046409   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.046420   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.046428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.050351   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.541239   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.541273   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.541285   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.541291   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.544605   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.545287   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.545303   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.545311   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.545314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.548353   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.041331   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.041356   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.041368   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.041373   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.045200   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.046010   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.046031   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.046039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.046046   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.049179   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.541488   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.541515   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.541528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.541536   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.545641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:10.546377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.546400   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.546407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.546410   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.549732   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.550616   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:11.040952   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.040974   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.040982   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.040989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.046528   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:11.047555   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.047571   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.047581   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.047586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.051499   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:11.541109   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.541139   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.541149   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.541154   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.545483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:11.546103   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.546119   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.546130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.546136   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.549272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:12.041130   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.041165   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.041176   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.041182   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.045465   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.046261   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.046277   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.046284   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.046289   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.054233   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:12.540971   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.540992   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.541000   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.541004   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.545075   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.545773   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.545789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.545799   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.545805   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.549003   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.041785   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.041807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.041817   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.041823   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.045506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.046197   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.046214   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.046221   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.046241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.048544   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:13.048911   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:13.541700   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.541728   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.541740   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.541748   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.545726   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.546727   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.546742   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.546749   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.546753   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.549687   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:14.041571   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.041593   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.041601   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.041605   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.045629   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.047164   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.047185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.047199   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.047203   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.052005   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:14.541017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.541043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.541055   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.541060   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.545027   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.546245   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.546266   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.546275   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.546280   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.549572   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.041446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.041468   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.041477   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.041481   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.045111   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.045983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.046004   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.046014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.046021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.055916   34720 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:32:15.056489   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:15.541417   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.541448   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.541460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.541465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.544952   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.545764   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.545781   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.545790   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.545795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.552050   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:16.040979   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.041003   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.041011   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.041016   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.045765   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:16.046411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.046427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.046435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.046439   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.056745   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:32:16.541660   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.541682   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.541692   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.541696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.545213   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:16.546092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.546110   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.546121   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.546126   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.548900   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.041375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.041399   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.041411   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.041417   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.045641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:17.046588   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.046611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.046621   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.046628   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.049632   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.541651   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.541676   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.541686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.541692   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.545407   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:17.546246   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.546269   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.546282   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.546290   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.549117   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.549778   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:18.041518   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.041556   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.041568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.041576   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:18.046748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.046769   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.046780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046787   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.052283   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:18.541399   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.541425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.541433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.541437   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.545011   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:18.546056   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.546078   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.546089   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.546097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.549203   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.041166   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.041201   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.041210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.041214   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.045755   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.046481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.046500   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.046510   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.046517   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.049924   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.541836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.541873   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.541885   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.541893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.546183   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.547097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.547116   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.547126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.547130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.551235   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.551688   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:20.041000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.041027   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.041039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.041053   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.045149   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.045912   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.045934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.045945   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.045950   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.049525   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:20.541792   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.541813   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.541821   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.541825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.546083   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.546947   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.546969   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.546980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.546988   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.551303   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:21.041910   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.041938   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.041950   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.041955   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.047824   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:21.048523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.048544   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.048555   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.048560   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.051690   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.541671   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.541695   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.541707   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.541714   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.545187   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.545925   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.545943   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.545957   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.549146   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.040908   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.040934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.040944   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.040949   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.044322   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.045253   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.045275   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.045286   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.045311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.048540   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.049217   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:22.541377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.541397   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.541405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.596027   34720 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0930 11:32:22.596840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.596858   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.596868   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.596876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.600101   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.041796   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.041817   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.041826   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.041830   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.046144   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:23.047374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.047396   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.047407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.047412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.051210   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.541365   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.541391   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.541403   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.544624   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.545332   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.545348   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.545356   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.545362   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.548076   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.040942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.040985   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.040995   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.040999   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.044909   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.045625   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.045642   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.045653   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.045658   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.048446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.541477   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.541497   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.541506   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.541509   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.545585   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:24.546447   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.546460   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.546468   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.546472   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.549497   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.550184   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:25.041599   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.041635   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.041645   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.041651   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.048106   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:25.048975   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.048998   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.049008   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.049013   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.054165   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:25.541178   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.541223   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.541235   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.541241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.545143   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:25.545923   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.545941   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.545962   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.549975   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.041161   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.041185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.041193   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.041199   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.045231   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:26.046025   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.046042   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.046049   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.046055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.048864   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:26.541487   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.541511   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.541521   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.541528   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.548114   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:26.548980   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.548993   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.549001   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.549005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.552757   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.553360   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:27.041590   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.041611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.041636   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.041639   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.046112   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:27.047076   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.047092   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.047100   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.047104   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.052347   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:27.541767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.541789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.541797   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.541801   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.545090   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:27.545664   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.545678   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.545686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.545690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.548839   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.041179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.041200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.041212   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.041217   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.046396   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:28.047355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.047372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.047384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.047388   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.053891   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:28.541237   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.541259   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.541268   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.541271   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545192   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.545941   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.545959   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.545967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.549204   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.550435   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.550457   34720 pod_ready.go:82] duration metric: took 20.009736872s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550559   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:32:28.550570   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.550580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.550590   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.553686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.554394   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:28.554407   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.554414   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.554420   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.556924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.557578   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.557600   34720 pod_ready.go:82] duration metric: took 7.108562ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557612   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557692   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:32:28.557702   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.557712   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.557722   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.560446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.561014   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:28.561029   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.561036   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.561040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.563867   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.564450   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.564468   34720 pod_ready.go:82] duration metric: took 6.836659ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:28.564568   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.564578   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.564586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.567937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.568639   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.568653   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.568661   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.568664   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.571277   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:29.065431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.065458   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.065466   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.065469   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.069406   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.070004   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.070020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.070028   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.070033   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.073076   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.565018   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.565043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.565052   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.565055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.568350   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.569071   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.569090   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.569101   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.569107   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.572794   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.065688   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.065710   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.065717   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.065721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.069593   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.070370   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.070385   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.070393   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.070397   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.073099   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.565351   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.565372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.565380   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.565385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.568480   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.569460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.569481   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.569489   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.569493   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.572043   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.572542   34720 pod_ready.go:103] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:31.064934   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:31.064954   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.064963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.064967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.069154   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:31.070615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.070631   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.070642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.070648   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.073638   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.074233   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.074258   34720 pod_ready.go:82] duration metric: took 2.50976614s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074273   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:32:31.074392   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.074418   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.074427   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.077429   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.078309   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:31.078326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.078336   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.078343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.080937   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.081321   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.081341   34720 pod_ready.go:82] duration metric: took 7.059128ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081353   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081418   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:32:31.081428   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.081438   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.081447   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.084351   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.084930   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:31.084944   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.084951   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.084956   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.087905   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.088473   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.088493   34720 pod_ready.go:82] duration metric: took 7.129947ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.088504   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.141826   34720 request.go:632] Waited for 53.255293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141907   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141915   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.141924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.141929   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.145412   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.341415   34720 request.go:632] Waited for 195.313156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341506   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.341520   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.341524   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.344937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.589605   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.589637   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.589646   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.589651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.593330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.741775   34720 request.go:632] Waited for 147.33103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741847   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.741857   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.741869   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.745796   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.089735   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.089761   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.089772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.089776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.093492   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.141705   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.141744   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.141752   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.141757   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.145662   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.589384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.589408   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.589418   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.589426   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.592976   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.593954   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.593971   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.593979   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.593983   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.597157   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.089690   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:33.089720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.089733   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.089743   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.094817   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:33.095412   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:33.095427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.095435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.095442   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.098967   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.099551   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.099569   34720 pod_ready.go:82] duration metric: took 2.011056626s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.099580   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.141920   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:32:33.141953   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.141961   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.141965   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.146176   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:33.342278   34720 request.go:632] Waited for 195.329061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342343   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342351   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.342362   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.342368   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.346051   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.346626   34720 pod_ready.go:98] node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346650   34720 pod_ready.go:82] duration metric: took 247.062207ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	E0930 11:32:33.346662   34720 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346673   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.541732   34720 request.go:632] Waited for 194.984853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541823   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541832   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.541839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.541846   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.545738   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.741681   34720 request.go:632] Waited for 195.307104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741746   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741753   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.741839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.741853   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.745711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.746422   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.746442   34720 pod_ready.go:82] duration metric: took 399.762428ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.746454   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.941491   34720 request.go:632] Waited for 194.974915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941575   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.941582   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.941585   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.945250   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.142081   34720 request.go:632] Waited for 196.05781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142187   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142199   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.142207   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.142211   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.146079   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.146737   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.146756   34720 pod_ready.go:82] duration metric: took 400.295141ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.146770   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.342040   34720 request.go:632] Waited for 195.196365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342146   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342159   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.342171   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.342181   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.345711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.541794   34720 request.go:632] Waited for 195.310617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541870   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541876   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.541884   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.541889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.545585   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.546141   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.546158   34720 pod_ready.go:82] duration metric: took 399.379827ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.546174   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.742192   34720 request.go:632] Waited for 195.896441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742266   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742272   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.742279   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.742283   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.745382   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.941671   34720 request.go:632] Waited for 195.443927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941750   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941755   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.941763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.941767   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.945425   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.946182   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.946207   34720 pod_ready.go:82] duration metric: took 400.022007ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.946220   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.142264   34720 request.go:632] Waited for 195.977294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142349   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142355   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.142363   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.142372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.146093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.342119   34720 request.go:632] Waited for 195.354718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342174   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342179   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.342185   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.342189   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.345678   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.346226   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.346244   34720 pod_ready.go:82] duration metric: took 400.013115ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.346253   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.541907   34720 request.go:632] Waited for 195.545182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541986   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541995   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.542006   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.542018   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.545604   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.741571   34720 request.go:632] Waited for 195.370489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741659   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741667   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.741678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.741690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.745574   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.746159   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.746179   34720 pod_ready.go:82] duration metric: took 399.919057ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.746193   34720 pod_ready.go:39] duration metric: took 31.793515417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:35.746211   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:32:35.746295   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:32:35.770439   34720 api_server.go:72] duration metric: took 32.015036347s to wait for apiserver process to appear ...
	I0930 11:32:35.770467   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:32:35.770491   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:32:35.775724   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:32:35.775811   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:32:35.775820   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.775829   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.775838   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.776730   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:32:35.776791   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:32:35.776806   34720 api_server.go:131] duration metric: took 6.332786ms to wait for apiserver health ...
	I0930 11:32:35.776814   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:32:35.942219   34720 request.go:632] Waited for 165.338166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942284   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942290   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.942302   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.942308   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.948613   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:35.956880   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:32:35.956918   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:35.956927   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:35.956932   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:35.956938   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:35.956942   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:35.956947   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:35.956951   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:35.956956   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:35.956960   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:35.956965   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:35.956971   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:35.956977   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:35.956988   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:35.956996   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:35.957001   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:35.957009   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:35.957014   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:35.957019   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:35.957027   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:35.957033   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:35.957041   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:35.957046   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:35.957053   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:35.957058   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:35.957066   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:35.957070   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:35.957081   34720 system_pods.go:74] duration metric: took 180.260558ms to wait for pod list to return data ...
	I0930 11:32:35.957093   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:32:36.141557   34720 request.go:632] Waited for 184.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141646   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141655   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.141664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.141669   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.146009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.146146   34720 default_sa.go:45] found service account: "default"
	I0930 11:32:36.146163   34720 default_sa.go:55] duration metric: took 189.061389ms for default service account to be created ...
	I0930 11:32:36.146176   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:32:36.341683   34720 request.go:632] Waited for 195.43917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341772   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.341789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.341795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.348026   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:36.355936   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:32:36.355974   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:36.355980   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:36.355985   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:36.355989   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:36.355993   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:36.355997   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:36.356000   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:36.356003   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:36.356007   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:36.356011   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:36.356015   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:36.356019   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:36.356022   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:36.356025   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:36.356028   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:36.356031   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:36.356034   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:36.356037   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:36.356041   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:36.356044   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:36.356050   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:36.356053   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:36.356059   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:36.356062   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:36.356065   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:36.356068   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:36.356075   34720 system_pods.go:126] duration metric: took 209.893533ms to wait for k8s-apps to be running ...
	I0930 11:32:36.356084   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:32:36.356128   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:32:36.376905   34720 system_svc.go:56] duration metric: took 20.807413ms WaitForService to wait for kubelet
	I0930 11:32:36.376934   34720 kubeadm.go:582] duration metric: took 32.621540674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:32:36.376952   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:32:36.541278   34720 request.go:632] Waited for 164.265532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541328   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541345   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.541372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.541378   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.545532   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.546930   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546950   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546960   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546964   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546970   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546975   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546980   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546984   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546989   34720 node_conditions.go:105] duration metric: took 170.032136ms to run NodePressure ...
	I0930 11:32:36.547003   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:32:36.547027   34720 start.go:255] writing updated cluster config ...
	I0930 11:32:36.548771   34720 out.go:201] 
	I0930 11:32:36.549990   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:36.550071   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.551533   34720 out.go:177] * Starting "ha-033260-m04" worker node in "ha-033260" cluster
	I0930 11:32:36.552654   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:32:36.552671   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:32:36.552768   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:32:36.552782   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:32:36.552887   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.553084   34720 start.go:360] acquireMachinesLock for ha-033260-m04: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:32:36.553130   34720 start.go:364] duration metric: took 26.329µs to acquireMachinesLock for "ha-033260-m04"
	I0930 11:32:36.553148   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:32:36.553160   34720 fix.go:54] fixHost starting: m04
	I0930 11:32:36.553451   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:36.553481   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:36.569922   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0930 11:32:36.570471   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:36.571044   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:36.571066   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:36.571377   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:36.571578   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:36.571759   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:32:36.573541   34720 fix.go:112] recreateIfNeeded on ha-033260-m04: state=Stopped err=<nil>
	I0930 11:32:36.573570   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	W0930 11:32:36.573771   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:32:36.575555   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m04" ...
	I0930 11:32:36.576772   34720 main.go:141] libmachine: (ha-033260-m04) Calling .Start
	I0930 11:32:36.576973   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring networks are active...
	I0930 11:32:36.577708   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network default is active
	I0930 11:32:36.578046   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network mk-ha-033260 is active
	I0930 11:32:36.578396   34720 main.go:141] libmachine: (ha-033260-m04) Getting domain xml...
	I0930 11:32:36.579052   34720 main.go:141] libmachine: (ha-033260-m04) Creating domain...
	I0930 11:32:37.876264   34720 main.go:141] libmachine: (ha-033260-m04) Waiting to get IP...
	I0930 11:32:37.877213   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:37.877645   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:37.877707   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:37.877598   36596 retry.go:31] will retry after 232.490022ms: waiting for machine to come up
	I0930 11:32:38.112070   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.112572   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.112594   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.112550   36596 retry.go:31] will retry after 256.882229ms: waiting for machine to come up
	I0930 11:32:38.371192   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.371815   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.371840   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.371754   36596 retry.go:31] will retry after 461.059855ms: waiting for machine to come up
	I0930 11:32:38.834060   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.834574   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.834602   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.834535   36596 retry.go:31] will retry after 561.972608ms: waiting for machine to come up
	I0930 11:32:39.398393   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:39.398837   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:39.398861   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:39.398804   36596 retry.go:31] will retry after 603.760478ms: waiting for machine to come up
	I0930 11:32:40.004623   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.004981   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.005003   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.004944   36596 retry.go:31] will retry after 795.659949ms: waiting for machine to come up
	I0930 11:32:40.802044   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.802482   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.802507   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.802432   36596 retry.go:31] will retry after 876.600506ms: waiting for machine to come up
	I0930 11:32:41.680956   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:41.681439   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:41.681475   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:41.681410   36596 retry.go:31] will retry after 1.356578507s: waiting for machine to come up
	I0930 11:32:43.039790   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:43.040245   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:43.040273   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:43.040181   36596 retry.go:31] will retry after 1.138308059s: waiting for machine to come up
	I0930 11:32:44.180454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:44.180880   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:44.180912   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:44.180838   36596 retry.go:31] will retry after 1.724095206s: waiting for machine to come up
	I0930 11:32:45.906969   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:45.907551   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:45.907580   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:45.907505   36596 retry.go:31] will retry after 2.79096153s: waiting for machine to come up
	I0930 11:32:48.699904   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:48.700403   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:48.700433   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:48.700358   36596 retry.go:31] will retry after 2.880773223s: waiting for machine to come up
	I0930 11:32:51.582182   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:51.582528   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:51.582553   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:51.582515   36596 retry.go:31] will retry after 3.567167233s: waiting for machine to come up
	I0930 11:32:55.151238   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.151679   34720 main.go:141] libmachine: (ha-033260-m04) Found IP for machine: 192.168.39.104
	I0930 11:32:55.151704   34720 main.go:141] libmachine: (ha-033260-m04) Reserving static IP address...
	I0930 11:32:55.151717   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has current primary IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.152141   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.152161   34720 main.go:141] libmachine: (ha-033260-m04) Reserved static IP address: 192.168.39.104
	I0930 11:32:55.152180   34720 main.go:141] libmachine: (ha-033260-m04) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"}
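The repeated "will retry after …ms: waiting for machine to come up" lines above show minikube polling libvirt for the guest's DHCP lease, backing off a little longer after each failed attempt. Below is a minimal sketch of that polling pattern in Go; the helper name, jitter, and timeout are illustrative and not minikube's actual retry.go API.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP illustrates the retry loop above: poll for an address, sleeping a
	// little longer (with jitter) after every failed attempt until the machine
	// reports an IP or the deadline passes.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 2 // grow the base delay, roughly like the 232ms → 3.5s progression in the log
		}
		return "", errors.New("timed out waiting for an IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 { // pretend the DHCP lease shows up on the fourth poll
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.104", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}

Growing the delay keeps the early checks responsive while avoiding a tight poll loop once the VM is clearly still booting.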
	I0930 11:32:55.152198   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Getting to WaitForSSH function...
	I0930 11:32:55.152212   34720 main.go:141] libmachine: (ha-033260-m04) Waiting for SSH to be available...
	I0930 11:32:55.154601   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.154955   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.154984   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.155062   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH client type: external
	I0930 11:32:55.155094   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa (-rw-------)
	I0930 11:32:55.155127   34720 main.go:141] libmachine: (ha-033260-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:32:55.155140   34720 main.go:141] libmachine: (ha-033260-m04) DBG | About to run SSH command:
	I0930 11:32:55.155169   34720 main.go:141] libmachine: (ha-033260-m04) DBG | exit 0
	I0930 11:32:55.282203   34720 main.go:141] libmachine: (ha-033260-m04) DBG | SSH cmd err, output: <nil>: 
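The WaitForSSH step above shells out to the system ssh client with the options shown in the log and runs "exit 0" until the command returns success. A stripped-down sketch of the same probe follows; the host and key path are copied from the log, and the retry interval is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "ssh ... exit 0" against the guest; a zero exit status means
	// sshd is up and the key is accepted, which is what the log is waiting for.
	func sshReady(host, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+host,
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa"
		for !sshReady("192.168.39.104", key) {
			time.Sleep(2 * time.Second) // poll interval assumed for illustration
		}
		fmt.Println("SSH is available")
	}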
	I0930 11:32:55.282534   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetConfigRaw
	I0930 11:32:55.283161   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.286073   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286485   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.286510   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286784   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:55.287029   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:32:55.287049   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:55.287272   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.289455   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.289920   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.289948   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.290156   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.290326   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290453   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290576   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.290707   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.290900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.290913   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:32:55.398165   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:32:55.398197   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398448   34720 buildroot.go:166] provisioning hostname "ha-033260-m04"
	I0930 11:32:55.398492   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398697   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.401792   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.402275   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402458   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.402629   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402793   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402918   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.403113   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.403282   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.403294   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m04 && echo "ha-033260-m04" | sudo tee /etc/hostname
	I0930 11:32:55.531966   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m04
	
	I0930 11:32:55.531997   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.535254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535632   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.535675   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535815   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.536008   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536169   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536305   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.536447   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.536613   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.536629   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:55.658892   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:55.658919   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:55.658936   34720 buildroot.go:174] setting up certificates
	I0930 11:32:55.658945   34720 provision.go:84] configureAuth start
	I0930 11:32:55.658953   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.659243   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.662312   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662773   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.662798   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662957   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.665302   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665663   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.665690   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665764   34720 provision.go:143] copyHostCerts
	I0930 11:32:55.665796   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665833   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:55.665842   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665927   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:55.666021   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666039   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:55.666047   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666074   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:55.666119   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666136   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:55.666142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:55.666213   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m04 san=[127.0.0.1 192.168.39.104 ha-033260-m04 localhost minikube]
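The "generating server cert" line above issues a serving certificate for this node, signed by the shared minikube CA, with the listed names and addresses as SANs. A simplified sketch of that kind of issuance using Go's crypto/x509 follows; field choices such as validity period and key size are assumptions, and minikube's certs package may differ.

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a serving certificate whose SANs cover the hostnames
	// and IPs from the log line above, signed by the given CA.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // validity assumed for illustration
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-033260-m04", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.104")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}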
	I0930 11:32:55.889392   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:55.889469   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:55.889499   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.892080   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892386   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.892413   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892551   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.892776   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.892978   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.893178   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:55.976164   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:55.976265   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:56.003465   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:56.003537   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:32:56.030648   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:56.030726   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:56.059845   34720 provision.go:87] duration metric: took 400.888299ms to configureAuth
	I0930 11:32:56.059878   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:56.060173   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:56.060271   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.063160   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063613   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.063639   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063847   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.064052   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064240   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064367   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.064511   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.064690   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.064709   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:56.291657   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:56.291682   34720 machine.go:96] duration metric: took 1.004640971s to provisionDockerMachine
	I0930 11:32:56.291696   34720 start.go:293] postStartSetup for "ha-033260-m04" (driver="kvm2")
	I0930 11:32:56.291709   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:56.291730   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.292023   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:56.292057   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.294563   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.294915   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.294940   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.295103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.295280   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.295424   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.295532   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.385215   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:56.389877   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:56.389903   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:56.389972   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:56.390073   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:56.390086   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:56.390178   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:56.400442   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:56.429361   34720 start.go:296] duration metric: took 137.644684ms for postStartSetup
	I0930 11:32:56.429427   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.429716   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:56.429741   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.432628   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433129   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.433159   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433319   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.433495   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.433694   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.433867   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.520351   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:56.520411   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:56.579433   34720 fix.go:56] duration metric: took 20.026269147s for fixHost
	I0930 11:32:56.579489   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.582670   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583091   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.583121   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583274   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.583494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583682   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583865   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.584063   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.584279   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.584294   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:56.698854   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695976.655532462
	
	I0930 11:32:56.698887   34720 fix.go:216] guest clock: 1727695976.655532462
	I0930 11:32:56.698900   34720 fix.go:229] Guest: 2024-09-30 11:32:56.655532462 +0000 UTC Remote: 2024-09-30 11:32:56.579461897 +0000 UTC m=+453.306592605 (delta=76.070565ms)
	I0930 11:32:56.698920   34720 fix.go:200] guest clock delta is within tolerance: 76.070565ms
	I0930 11:32:56.698927   34720 start.go:83] releasing machines lock for "ha-033260-m04", held for 20.145784895s
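The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the difference is small. A small sketch of that comparison, using the timestamps from the log, is shown below; the 2s tolerance is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's "date +%s.%N" output and returns the absolute
	// difference from the host clock, which is what fix.go reports as the delta.
	func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := hostNow.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Values taken from the log lines above.
		host := time.Date(2024, time.September, 30, 11, 32, 56, 579461897, time.UTC)
		d, _ := clockDelta("1727695976.655532462", host)
		fmt.Printf("delta=%v within tolerance=%v\n", d, d < 2*time.Second)
	}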
	I0930 11:32:56.698949   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.699224   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:56.702454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.702852   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.702883   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.705376   34720 out.go:177] * Found network options:
	I0930 11:32:56.706947   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	W0930 11:32:56.708247   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708274   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708287   34720 proxy.go:119] fail to check proxy env: Error ip not in block
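The "fail to check proxy env: Error ip not in block" warnings here (repeated a few lines below) come from minikube testing whether an address is already covered by the NO_PROXY list shown under "Found network options". A rough sketch of such a containment check, treating entries as either literal IPs or CIDR blocks, follows; the helper name is illustrative.

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// inNoProxy reports whether addr is covered by any entry in a NO_PROXY-style
	// list, accepting either exact IPs or CIDR blocks as entries.
	func inNoProxy(addr, noProxy string) bool {
		ip := net.ParseIP(addr)
		for _, entry := range strings.Split(noProxy, ",") {
			entry = strings.TrimSpace(entry)
			if entry == addr {
				return true
			}
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		// NO_PROXY value taken from the "Found network options" line above.
		fmt.Println(inNoProxy("192.168.39.104", "192.168.39.249,192.168.39.3,192.168.39.238"))
	}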
	I0930 11:32:56.708308   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.708969   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709162   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709279   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:56.709323   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	W0930 11:32:56.709360   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709386   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709401   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.709475   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:56.709494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.712173   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712335   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712568   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712592   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712731   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.712845   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712870   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712874   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.712987   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.713033   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.713168   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.713207   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713330   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.934813   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:56.941348   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:56.941419   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:56.960961   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:56.960992   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:56.961081   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:56.980594   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:56.996216   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:56.996273   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:57.013214   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:57.028755   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:57.149354   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:57.318133   34720 docker.go:233] disabling docker service ...
	I0930 11:32:57.318197   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:57.334364   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:57.349711   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:57.496565   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:57.627318   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:57.643513   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:57.667655   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:57.667720   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.680838   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:57.680907   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.693421   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.705291   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.717748   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:57.730805   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.742351   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.761934   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.773112   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:57.783201   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:57.783257   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:57.797812   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:57.813538   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:57.938077   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:58.044521   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:58.044587   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
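After restarting CRI-O, the log waits up to 60s for the runtime socket to appear by stat-ing it, as shown above. A minimal sketch of that wait loop follows; the 500ms poll interval is an assumption.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the CRI socket path until it exists or the deadline
	// passes, mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}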
	I0930 11:32:58.049533   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:58.049596   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:58.053988   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:58.101662   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:58.101732   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.132323   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.163597   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:58.164981   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:58.166271   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:58.167862   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	I0930 11:32:58.169165   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:58.172162   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172529   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:58.172550   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172762   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:58.178993   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.194096   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:58.194385   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.194741   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.194790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.210665   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0930 11:32:58.211101   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.211610   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.211628   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.211954   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.212130   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:58.213485   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:58.213820   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.213854   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.228447   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0930 11:32:58.228877   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.229355   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.229373   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.229837   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.230027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:58.230180   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.104
	I0930 11:32:58.230191   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:58.230204   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:58.230340   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:58.230387   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:58.230397   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:58.230409   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:58.230422   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:58.230434   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:58.230491   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:58.230521   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:58.230531   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:58.230554   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:58.230577   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:58.230597   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:58.230650   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:58.230688   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.230705   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.230732   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.230759   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:58.258115   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:58.284212   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:58.311332   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:58.336428   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:58.362719   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:58.389689   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:58.416593   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:58.423417   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:58.435935   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442361   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442428   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.448829   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:58.461056   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:58.473436   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478046   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478120   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.484917   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:58.497497   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:58.509506   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514695   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514766   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.521000   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:58.533195   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:58.538066   34720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:32:58.538108   34720 kubeadm.go:934] updating node {m04 192.168.39.104 0 v1.31.1 crio false true} ...
	I0930 11:32:58.538196   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:58.538246   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:58.549564   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:58.549678   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0930 11:32:58.561086   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:58.581046   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:58.599680   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:58.603972   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.618040   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.758745   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.778316   34720 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0930 11:32:58.778666   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.780417   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:58.781848   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.954652   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.980788   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:58.981140   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:58.981229   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:58.981531   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:58.981654   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:58.981668   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:58.981678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:58.981682   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:58.985441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.482501   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:59.482522   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.482530   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.482534   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.485809   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.486316   34720 node_ready.go:49] node "ha-033260-m04" has status "Ready":"True"
	I0930 11:32:59.486339   34720 node_ready.go:38] duration metric: took 504.792648ms for node "ha-033260-m04" to be "Ready" ...
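The GET requests against /api/v1/nodes/ha-033260-m04 above are minikube polling the node object until its Ready condition reports True. An equivalent check written directly against client-go would look roughly like the sketch below; the kubeconfig path is taken from the log and error handling is kept minimal.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True,
	// which is what the `"Ready":"True"` status lines in the log correspond to.
	func nodeReady(client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19734-3842/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(client, "ha-033260-m04")
		fmt.Println(ready, err)
	}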
	I0930 11:32:59.486347   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:59.486421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:59.486437   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.486444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.486448   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.491643   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:59.500880   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.501000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:59.501020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.501033   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.501040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.504511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.505105   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.505120   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.505126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.505130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.508330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.508816   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.508834   34720 pod_ready.go:82] duration metric: took 7.916953ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508846   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508911   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:59.508921   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.508931   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.508940   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.512254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.513133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.513147   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.513157   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.513162   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.516730   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.517273   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.517290   34720 pod_ready.go:82] duration metric: took 8.437165ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517301   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517361   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:59.517370   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.517380   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.517387   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521073   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.521748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.521764   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.521772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.524702   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.525300   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.525316   34720 pod_ready.go:82] duration metric: took 8.008761ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525325   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:59.525383   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.525390   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.525393   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.528314   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.528898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:59.528914   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.528924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.528930   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.531717   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.532229   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.532246   34720 pod_ready.go:82] duration metric: took 6.914296ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.532257   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.682582   34720 request.go:632] Waited for 150.25854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682645   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.682658   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.682662   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.689539   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:59.883130   34720 request.go:632] Waited for 192.41473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.883210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.883232   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.887618   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:59.888108   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.888129   34720 pod_ready.go:82] duration metric: took 355.865471ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.888150   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.083448   34720 request.go:632] Waited for 195.22183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083541   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083549   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.083560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.083571   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.087440   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.283491   34720 request.go:632] Waited for 195.322885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.283590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.283596   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.287218   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.287959   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.287982   34720 pod_ready.go:82] duration metric: took 399.823014ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.287995   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.483353   34720 request.go:632] Waited for 195.279455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483436   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483446   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.483457   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.483468   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.487640   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:00.682537   34720 request.go:632] Waited for 194.177349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682623   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.682632   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.682641   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.686128   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.686721   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.686744   34720 pod_ready.go:82] duration metric: took 398.740461ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.686757   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.882895   34720 request.go:632] Waited for 196.06624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882956   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.882963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.882967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.887704   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.082816   34720 request.go:632] Waited for 194.378573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082908   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.082920   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.082928   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.086938   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.088023   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.088045   34720 pod_ready.go:82] duration metric: took 401.279304ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.088058   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.283083   34720 request.go:632] Waited for 194.957282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283183   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283198   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.283211   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.283221   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.288754   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:33:01.482812   34720 request.go:632] Waited for 193.21938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482876   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482883   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.482895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.482906   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.487184   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.488013   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.488035   34720 pod_ready.go:82] duration metric: took 399.968755ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.488047   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.682796   34720 request.go:632] Waited for 194.675415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682878   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682885   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.682895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.682903   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.687354   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.883473   34720 request.go:632] Waited for 195.37133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883544   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883551   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.883560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.883565   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.887254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.887998   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.888020   34720 pod_ready.go:82] duration metric: took 399.964872ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.888033   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.082969   34720 request.go:632] Waited for 194.870325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083045   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083051   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.083059   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.083071   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.087791   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.283169   34720 request.go:632] Waited for 194.361368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283289   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283304   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.283331   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.283350   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.289541   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:02.290706   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:02.290729   34720 pod_ready.go:82] duration metric: took 402.687198ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.290741   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.483158   34720 request.go:632] Waited for 192.351675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483216   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483222   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.483229   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.483233   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.487135   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:02.683325   34720 request.go:632] Waited for 195.063306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683451   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683485   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.683516   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.683525   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.687678   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.883237   34720 request.go:632] Waited for 92.265907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883323   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883335   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.883343   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.883351   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.887580   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.082785   34720 request.go:632] Waited for 194.294379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082857   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082862   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.082872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.082876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.086700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.291740   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:03.291767   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.291777   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.291783   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.295392   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.483576   34720 request.go:632] Waited for 187.437599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483647   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483655   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.483667   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.483677   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.487588   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.488048   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.488067   34720 pod_ready.go:82] duration metric: took 1.197317957s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.488076   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.683488   34720 request.go:632] Waited for 195.341906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.683590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.683597   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.687625   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.882797   34720 request.go:632] Waited for 194.279012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882884   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882896   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.882906   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.882924   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.886967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.887827   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.887857   34720 pod_ready.go:82] duration metric: took 399.773896ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.887870   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.082926   34720 request.go:632] Waited for 194.972094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083025   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.083037   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.083041   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.087402   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.283534   34720 request.go:632] Waited for 194.922082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283619   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.283626   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.283630   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.287420   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:04.288067   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.288124   34720 pod_ready.go:82] duration metric: took 400.245815ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.288141   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.483212   34720 request.go:632] Waited for 194.995215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483277   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483290   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.483319   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.483325   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.487831   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.682773   34720 request.go:632] Waited for 194.183233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682843   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.682854   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.682858   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.686967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.687793   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.687819   34720 pod_ready.go:82] duration metric: took 399.669055ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.687836   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.882848   34720 request.go:632] Waited for 194.931159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882922   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882930   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.882942   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.882951   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.886911   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.083280   34720 request.go:632] Waited for 195.375329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083376   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083387   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.083398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.083407   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.086880   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.087419   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.087441   34720 pod_ready.go:82] duration metric: took 399.596031ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.087453   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.282500   34720 request.go:632] Waited for 194.956546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282556   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282561   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.282568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.282582   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.285978   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.482968   34720 request.go:632] Waited for 196.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483125   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483139   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.483149   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.483155   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.489591   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:05.490240   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.490263   34720 pod_ready.go:82] duration metric: took 402.801252ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.490276   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.683160   34720 request.go:632] Waited for 192.80812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683317   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683345   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.683360   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.683366   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.687330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.883447   34720 request.go:632] Waited for 195.335552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883530   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.883545   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.883553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.887272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.888002   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.888020   34720 pod_ready.go:82] duration metric: took 397.737135ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.888031   34720 pod_ready.go:39] duration metric: took 6.401673703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:33:05.888048   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:33:05.888099   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:33:05.905331   34720 system_svc.go:56] duration metric: took 17.278667ms WaitForService to wait for kubelet
	I0930 11:33:05.905363   34720 kubeadm.go:582] duration metric: took 7.126999309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:33:05.905382   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:33:06.082680   34720 request.go:632] Waited for 177.227376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082733   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082739   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:06.082746   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:06.082751   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:06.087224   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:06.088896   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088918   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088929   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088932   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088935   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088939   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088942   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088945   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088948   34720 node_conditions.go:105] duration metric: took 183.562454ms to run NodePressure ...
	I0930 11:33:06.088959   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:33:06.088977   34720 start.go:255] writing updated cluster config ...
	I0930 11:33:06.089268   34720 ssh_runner.go:195] Run: rm -f paused
	I0930 11:33:06.143377   34720 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:33:06.145486   34720 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.099524011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988099499811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3708d53-121d-472a-8e6c-7648f9feef66 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.100093097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=793d3fd3-48f2-4467-b2e2-febfa6791c3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.100150140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=793d3fd3-48f2-4467-b2e2-febfa6791c3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.100491197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=793d3fd3-48f2-4467-b2e2-febfa6791c3e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.138973481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf1e3c7f-108c-466a-bdab-903f8c0b235d name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.139047862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf1e3c7f-108c-466a-bdab-903f8c0b235d name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.140160779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b188b8a8-3fda-4a67-ad71-db54d3acd614 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.140679878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988140652914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b188b8a8-3fda-4a67-ad71-db54d3acd614 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.141201880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dc71bd5-ef58-4eaf-97d2-e9ffa1183c4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.141260766Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dc71bd5-ef58-4eaf-97d2-e9ffa1183c4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.141589833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dc71bd5-ef58-4eaf-97d2-e9ffa1183c4f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.188262609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8af7a66-8c06-4a3d-b16b-83a80e13497f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.188393114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8af7a66-8c06-4a3d-b16b-83a80e13497f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.189499548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80abc763-5c5e-425f-9ec9-b6b354e4c97d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.189950113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988189924729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80abc763-5c5e-425f-9ec9-b6b354e4c97d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.190772633Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=302fbb9d-d205-49db-a7f3-dca5e0b67ec9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.190829298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=302fbb9d-d205-49db-a7f3-dca5e0b67ec9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.191121060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=302fbb9d-d205-49db-a7f3-dca5e0b67ec9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.236282764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b41954aa-1ad8-4a61-8cdc-66949470198b name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.236411267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b41954aa-1ad8-4a61-8cdc-66949470198b name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.237806736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a67c5e71-a47c-41ec-bc64-522198005a17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.238536257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988238216733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a67c5e71-a47c-41ec-bc64-522198005a17 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.239297502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7f20c82-7ce4-4038-9cd0-fde0f55fa313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.239443472Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7f20c82-7ce4-4038-9cd0-fde0f55fa313 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:08 ha-033260 crio[1037]: time="2024-09-30 11:33:08.241172370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7f20c82-7ce4-4038-9cd0-fde0f55fa313 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	88e9d994261ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   d40067a91d083       storage-provisioner
	df3f12d455b8e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   2                   80de34a6f14ca       busybox-7dff88458-nbhwc
	1937cce4ac070       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               2                   40863d7ac6437       kindnet-g94k6
	447147b39349f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                2                   96e86b12ad9b7       kube-proxy-mxvxr
	d33c75c18e088       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   74bab7f17b06b       coredns-7c65d6cfc9-kt87v
	88e2f3c9b905b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   f6863e18fb197       coredns-7c65d6cfc9-5frmm
	f4c792280b15b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       4                   d40067a91d083       storage-provisioner
	487866f095e01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   4                   1eee82fccc84c       kube-controller-manager-ha-033260
	6ea8bba210502       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            4                   498808de72075       kube-apiserver-ha-033260
	bf743c3bfec10       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     2 minutes ago        Running             kube-vip                  1                   bfb2a9b6e2e5a       kube-vip-ha-033260
	91514ddf1467c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            3                   498808de72075       kube-apiserver-ha-033260
	b2e1a261e4464       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      2                   5d3f45272bb02       etcd-ha-033260
	fd2ffaa7ff33d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            2                   aeafc6ee55a4d       kube-scheduler-ha-033260
	9f9c8e0b4eb8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   3                   1eee82fccc84c       kube-controller-manager-ha-033260
	
	
	==> coredns [88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60977 - 56023 "HINFO IN 6022066924044087929.8494370084378227503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030589997s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1363673838]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.175) (total time: 30002ms):
	Trace[1363673838]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:31:59.176)
	Trace[1363673838]: [30.00230997s] [30.00230997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1452341617]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30003ms):
	Trace[1452341617]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1452341617]: [30.0032564s] [30.0032564s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1546520065]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1546520065]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1546520065]: [30.002775951s] [30.002775951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44743 - 60294 "HINFO IN 2203689339262482561.411210931008286347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030703121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469308931]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[469308931]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.176)
	Trace[469308931]: [30.002568999s] [30.002568999s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1100740362]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1100740362]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1100740362]: [30.002476509s] [30.002476509s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1653957079]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.176) (total time: 30002ms):
	Trace[1653957079]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.178)
	Trace[1653957079]: [30.002259084s] [30.002259084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:31:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    819b9c53-0125-4e30-b11d-f0c734cdb490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 21m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  21m                    kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                    kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m                    kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                20m                    kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s (x7 over 2m30s)  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           103s                   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           44s                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    c982302c-6e81-49de-9ba4-9fad6b0527be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 20m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)    kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)    kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)    kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           20m                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           18m                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             16m                  node-controller  Node ha-033260-m02 status is now: NodeNotReady
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m7s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m7s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m7s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           104s                 node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           103s                 node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           44s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 48s                kube-proxy       
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           104s               node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  64s                kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s                kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s                kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 64s                kubelet          Node ha-033260-m03 has been rebooted, boot id: 0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Normal   RegisteredNode           44s                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:32:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    5c8fe13a-3363-443e-bb87-2dda804740af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeReady                17m                kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           104s               node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           103s               node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeNotReady             64s                node-controller  Node ha-033260-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           44s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-033260-m04 has been rebooted, boot id: 5c8fe13a-3363-443e-bb87-2dda804740af
	  Normal   NodeReady                9s                 kubelet          Node ha-033260-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051485] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.894871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.799819] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.926902] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.063947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060890] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.189706] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.143881] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.315063] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +4.231701] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.066662] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.898522] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.432816] kauditd_printk_skb: 40 callbacks suppressed
	[Sep30 11:31] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199] <==
	{"level":"warn","ts":"2024-09-30T11:32:00.706787Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:00.706927Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:02.199856Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:02.200043Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:05.707026Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:05.707078Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:06.201787Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:06.201855Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:09.282799Z","caller":"traceutil/trace.go:171","msg":"trace[64857037] transaction","detail":"{read_only:false; response_revision:2187; number_of_response:1; }","duration":"116.655315ms","start":"2024-09-30T11:32:09.166129Z","end":"2024-09-30T11:32:09.282785Z","steps":["trace[64857037] 'process raft request'  (duration: 116.522686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:32:10.203832Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.203980Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.707792Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.707849Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:13.224903Z","caller":"traceutil/trace.go:171","msg":"trace[1999434344] transaction","detail":"{read_only:false; response_revision:2202; number_of_response:1; }","duration":"128.3691ms","start":"2024-09-30T11:32:13.096517Z","end":"2024-09-30T11:32:13.224886Z","steps":["trace[1999434344] 'process raft request'  (duration: 128.283973ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:32:14.206454Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:14.206583Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:15.708904Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:15.708958Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:17.251669Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.251722Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.287868Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.302877Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"ff39ee5ac13ccc82","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T11:32:17.302984Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.303315Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"ff39ee5ac13ccc82","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T11:32:17.303445Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	
	
	==> kernel <==
	 11:33:08 up 2 min,  0 users,  load average: 0.35, 0.29, 0.12
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe] <==
	I0930 11:32:30.501053       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:32:40.499088       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:32:40.499266       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:32:40.499772       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:32:40.499847       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:32:40.500003       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:32:40.500065       1 main.go:299] handling current node
	I0930 11:32:40.500100       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:32:40.500194       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:32:50.507743       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:32:50.507860       1 main.go:299] handling current node
	I0930 11:32:50.507881       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:32:50.507891       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:32:50.508455       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:32:50.508496       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:32:50.508617       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:32:50.508652       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:33:00.499131       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:33:00.499232       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:33:00.499532       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:33:00.499622       1 main.go:299] handling current node
	I0930 11:33:00.499648       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:33:00.499654       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:33:00.499852       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:33:00.499936       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c] <==
	I0930 11:31:21.381575       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0930 11:31:21.538562       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:31:21.543182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:31:21.543721       1 policy_source.go:224] refreshing policies
	I0930 11:31:21.579575       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:31:21.579665       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:31:21.580585       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:31:21.581145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:31:21.581189       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:31:21.579601       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 11:31:21.579657       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:31:21.581999       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 11:31:21.582037       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:31:21.582044       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:31:21.582048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:31:21.582053       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:31:21.586437       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0930 11:31:21.607643       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0930 11:31:21.609050       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:31:21.622457       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 11:31:21.631794       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 11:31:21.643397       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:31:22.390935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 11:31:22.949170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	W0930 11:31:42.954664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249 192.168.39.3]
	
	
	==> kube-apiserver [91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1] <==
	I0930 11:30:45.187556       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:30:45.195121       1 server.go:142] Version: v1.31.1
	I0930 11:30:45.195252       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.676469       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:30:46.702385       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:30:46.710100       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:30:46.716179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:30:46.716589       1 instance.go:232] Using reconciler: lease
	W0930 11:31:06.661936       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.662284       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.717971       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:31:06.718008       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a] <==
	I0930 11:31:33.688278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:31:45.516462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260"
	I0930 11:32:04.057525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:32:04.231728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m03"
	I0930 11:32:04.684993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:04.715867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:05.230878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.005408ms"
	I0930 11:32:05.231081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.153µs"
	I0930 11:32:05.557209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:08.338134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.292563ms"
	I0930 11:32:08.339116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.778µs"
	I0930 11:32:08.388088       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.933664ms"
	I0930 11:32:08.388698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.661µs"
	I0930 11:32:08.496117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.972779ms"
	I0930 11:32:08.496306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.205µs"
	I0930 11:32:09.843317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:21.311773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.445217ms"
	I0930 11:32:21.312598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.676µs"
	I0930 11:32:24.622549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:24.711638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:34.663019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m03"
	I0930 11:32:59.222283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.222671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:32:59.248647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.651723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	
	
	==> kube-controller-manager [9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438] <==
	I0930 11:30:45.993698       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:30:46.957209       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:30:46.957296       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.962662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:30:46.963278       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:30:46.963571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:30:46.963743       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:31:21.471526       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:31:29.611028       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:31:29.650081       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:31:29.650432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:31:29.730719       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:31:29.730781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:31:29.730811       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:31:29.734900       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:31:29.735864       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:31:29.735899       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:31:29.738688       1 config.go:199] "Starting service config controller"
	I0930 11:31:29.738986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:31:29.739407       1 config.go:328] "Starting node config controller"
	I0930 11:31:29.739433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:31:29.739913       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:31:29.743750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:31:29.743822       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:31:29.840409       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:31:29.840462       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40] <==
	E0930 11:31:21.474807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.474916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:31:21.474948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:31:21.475069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:31:21.475172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 11:31:21.475756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 11:31:21.475283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:31:21.476052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 11:31:21.476242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 11:31:21.476437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:31:21.476777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 11:31:21.478491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:31:21.478709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.480661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:31:21.480791       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 11:31:23.035263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 11:31:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:31:48 ha-033260 kubelet[1140]: E0930 11:31:48.042221    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695908040568557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:48 ha-033260 kubelet[1140]: E0930 11:31:48.042430    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695908040568557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:58 ha-033260 kubelet[1140]: E0930 11:31:58.049371    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695918048636085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:58 ha-033260 kubelet[1140]: E0930 11:31:58.049438    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695918048636085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:59 ha-033260 kubelet[1140]: I0930 11:31:59.442892    1140 scope.go:117] "RemoveContainer" containerID="f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519"
	Sep 30 11:32:08 ha-033260 kubelet[1140]: E0930 11:32:08.056080    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695928055647323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:08 ha-033260 kubelet[1140]: E0930 11:32:08.056129    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695928055647323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:18 ha-033260 kubelet[1140]: E0930 11:32:18.057838    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695938057305846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:18 ha-033260 kubelet[1140]: E0930 11:32:18.058200    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695938057305846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:28 ha-033260 kubelet[1140]: E0930 11:32:28.061267    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695948060750230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:28 ha-033260 kubelet[1140]: E0930 11:32:28.061868    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695948060750230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.065557    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695958063255462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.065622    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695958063255462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.068753    1140 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:32:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:32:48 ha-033260 kubelet[1140]: E0930 11:32:48.067027    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695968066700890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:48 ha-033260 kubelet[1140]: E0930 11:32:48.067068    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695968066700890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071041    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071099    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077457    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077518    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (466.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-033260" in json of 'profile list' to have "Degraded" status but have "OKHAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-033260\",\"Status\":\"OKHAppy\",\"Config\":{\"Name\":\"ha-033260\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-033260\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.104\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":fals
e,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSi
ze\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.695546746s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-033260 stop -v=7                                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true                                                         | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:25 UTC | 30 Sep 24 11:33 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:25:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:25:23.307171   34720 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:25:23.307438   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307448   34720 out.go:358] Setting ErrFile to fd 2...
	I0930 11:25:23.307454   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307638   34720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:25:23.308189   34720 out.go:352] Setting JSON to false
	I0930 11:25:23.309088   34720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4070,"bootTime":1727691453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:25:23.309188   34720 start.go:139] virtualization: kvm guest
	I0930 11:25:23.312163   34720 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:25:23.313387   34720 notify.go:220] Checking for updates...
	I0930 11:25:23.313393   34720 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:25:23.314778   34720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:25:23.316338   34720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:25:23.317962   34720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:25:23.319385   34720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:25:23.320813   34720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:25:23.322948   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:25:23.323340   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.323412   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.338759   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0930 11:25:23.339192   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.339786   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.339807   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.340136   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.340346   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.340572   34720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:25:23.340857   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.340891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.355777   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0930 11:25:23.356254   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.356744   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.356763   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.357120   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.357292   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.393653   34720 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:25:23.394968   34720 start.go:297] selected driver: kvm2
	I0930 11:25:23.394986   34720 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false
efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.395148   34720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:25:23.395486   34720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.395574   34720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:25:23.411100   34720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:25:23.411834   34720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:25:23.411865   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:25:23.411907   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:25:23.411964   34720 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.412098   34720 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.413851   34720 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:25:23.415381   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:25:23.415422   34720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:25:23.415429   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:25:23.415534   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:25:23.415546   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:25:23.415667   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:25:23.415859   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:25:23.415901   34720 start.go:364] duration metric: took 23.767µs to acquireMachinesLock for "ha-033260"
	I0930 11:25:23.415913   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:25:23.415920   34720 fix.go:54] fixHost starting: 
	I0930 11:25:23.416165   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.416196   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.430823   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0930 11:25:23.431277   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.431704   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.431723   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.432018   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.432228   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.432375   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:25:23.433975   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:25:23.434007   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:25:23.436150   34720 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:25:23.437473   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:25:23.437494   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.437753   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:25:23.440392   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.440831   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:25:23.440858   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.441041   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:25:23.441214   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441380   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441502   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:25:23.441655   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:25:23.441833   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:25:23.441844   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:25:26.337999   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:29.409914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:35.489955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:38.561928   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:44.641887   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:47.713916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:53.793988   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:56.865946   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:10.017864   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:16.097850   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:19.169940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:25.249934   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:28.321888   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:34.401910   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:37.473948   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:43.553872   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:46.625911   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:52.705908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:55.777884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:01.857921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:04.929922   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:11.009956   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:14.081936   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:20.161884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:23.233917   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:29.313903   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:32.385985   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:38.465815   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:41.537920   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:47.617898   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:50.689890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:56.769908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:59.841901   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:05.921893   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:08.993941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:15.073913   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:18.145943   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:24.225916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:27.297994   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:33.377803   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:36.449892   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:42.529904   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:45.601915   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:51.681921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:54.753890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:00.833932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:03.905924   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:09.985909   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:13.057955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:19.137932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:22.209941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:28.289972   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:31.361973   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:37.441940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:40.513906   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:46.593938   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:49.665931   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:55.745914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:58.817932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:04.897939   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:07.900098   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:07.900146   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900476   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:07.900498   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900690   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:07.902604   34720 machine.go:96] duration metric: took 4m44.465113929s to provisionDockerMachine
	I0930 11:30:07.902642   34720 fix.go:56] duration metric: took 4m44.486721557s for fixHost
	I0930 11:30:07.902649   34720 start.go:83] releasing machines lock for "ha-033260", held for 4m44.486740655s
	W0930 11:30:07.902664   34720 start.go:714] error starting host: provision: host is not running
	W0930 11:30:07.902739   34720 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 11:30:07.902751   34720 start.go:729] Will try again in 5 seconds ...
	I0930 11:30:12.906532   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:12.906673   34720 start.go:364] duration metric: took 71.92µs to acquireMachinesLock for "ha-033260"
	I0930 11:30:12.906700   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:12.906710   34720 fix.go:54] fixHost starting: 
	I0930 11:30:12.906980   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:12.907012   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:12.922017   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0930 11:30:12.922407   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:12.922875   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:12.922898   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:12.923192   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:12.923373   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:12.923532   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:30:12.925123   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Stopped err=<nil>
	I0930 11:30:12.925146   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	W0930 11:30:12.925301   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:12.927074   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260" ...
	I0930 11:30:12.928250   34720 main.go:141] libmachine: (ha-033260) Calling .Start
	I0930 11:30:12.928414   34720 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:30:12.929185   34720 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:30:12.929536   34720 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:30:12.929877   34720 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:30:12.930569   34720 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:30:14.153271   34720 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:30:14.154287   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.154676   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.154756   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.154665   35728 retry.go:31] will retry after 246.651231ms: waiting for machine to come up
	I0930 11:30:14.403231   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.403674   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.403727   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.403659   35728 retry.go:31] will retry after 262.960523ms: waiting for machine to come up
	I0930 11:30:14.668247   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.668711   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.668739   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.668675   35728 retry.go:31] will retry after 381.564783ms: waiting for machine to come up
	I0930 11:30:15.052320   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.052821   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.052846   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.052760   35728 retry.go:31] will retry after 588.393032ms: waiting for machine to come up
	I0930 11:30:15.642361   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.642772   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.642801   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.642723   35728 retry.go:31] will retry after 588.302425ms: waiting for machine to come up
	I0930 11:30:16.232721   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:16.233152   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:16.233171   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:16.233111   35728 retry.go:31] will retry after 770.742378ms: waiting for machine to come up
	I0930 11:30:17.005248   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:17.005687   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:17.005718   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:17.005645   35728 retry.go:31] will retry after 1.118737809s: waiting for machine to come up
	I0930 11:30:18.126316   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:18.126728   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:18.126755   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:18.126678   35728 retry.go:31] will retry after 1.317343847s: waiting for machine to come up
	I0930 11:30:19.446227   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:19.446785   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:19.446810   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:19.446709   35728 retry.go:31] will retry after 1.309700527s: waiting for machine to come up
	I0930 11:30:20.758241   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:20.758680   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:20.758702   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:20.758651   35728 retry.go:31] will retry after 1.521862953s: waiting for machine to come up
	I0930 11:30:22.282731   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:22.283205   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:22.283242   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:22.283159   35728 retry.go:31] will retry after 2.906878377s: waiting for machine to come up
	I0930 11:30:25.192687   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:25.193133   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:25.193170   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:25.193111   35728 retry.go:31] will retry after 2.807596314s: waiting for machine to come up
	I0930 11:30:28.002489   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:28.002972   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:28.003005   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:28.002951   35728 retry.go:31] will retry after 2.762675727s: waiting for machine to come up
	I0930 11:30:30.769002   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.769600   34720 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:30:30.769647   34720 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:30:30.769660   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.770061   34720 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:30:30.770097   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.770113   34720 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:30:30.770138   34720 main.go:141] libmachine: (ha-033260) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"}
	I0930 11:30:30.770150   34720 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:30:30.772370   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.772760   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772873   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:30:30.772897   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:30:30.772957   34720 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:30.772978   34720 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:30:30.772991   34720 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:30:30.902261   34720 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:30.902682   34720 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:30:30.903345   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:30.905986   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906435   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.906466   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906792   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:30.907003   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:30.907027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:30.907234   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:30.909474   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.909877   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.909908   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.910031   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:30.910192   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910303   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910430   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:30.910552   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:30.910754   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:30.910767   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:31.026522   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:31.026555   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.026772   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:31.026799   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.027007   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.029600   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.029965   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.029992   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.030147   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.030327   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030457   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030592   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.030726   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.030900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.030913   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:30:31.158417   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:30:31.158470   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.161439   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.161861   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.161898   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.162135   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.162317   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162476   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162595   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.162742   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.162897   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.162912   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:31.283806   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:31.283837   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:31.283864   34720 buildroot.go:174] setting up certificates
	I0930 11:30:31.283877   34720 provision.go:84] configureAuth start
	I0930 11:30:31.283888   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.284156   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:31.287095   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287561   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.287586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287860   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.290260   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290610   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.290638   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290768   34720 provision.go:143] copyHostCerts
	I0930 11:30:31.290802   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290847   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:31.290855   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290923   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:31.291012   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291029   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:31.291036   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291062   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:31.291116   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291138   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:31.291144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291169   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:31.291235   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:30:31.357378   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:31.357434   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:31.357461   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.360265   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360612   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.360639   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360895   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.361087   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.361219   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.361344   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.448948   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:31.449019   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:31.478937   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:31.479012   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:30:31.509585   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:31.509668   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:31.539539   34720 provision.go:87] duration metric: took 255.649967ms to configureAuth
	I0930 11:30:31.539565   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:31.539759   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:31.539826   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.542626   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543038   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.543072   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543261   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.543501   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543644   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543761   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.543949   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.544136   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.544151   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:31.800600   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:31.800624   34720 machine.go:96] duration metric: took 893.601125ms to provisionDockerMachine
	I0930 11:30:31.800638   34720 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:30:31.800650   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:31.800670   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.801007   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:31.801030   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.803813   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804193   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.804222   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804441   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.804604   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.804769   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.804939   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.893164   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:31.898324   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:31.898349   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:31.898488   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:31.898642   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:31.898657   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:31.898771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:31.909611   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:31.940213   34720 start.go:296] duration metric: took 139.562436ms for postStartSetup
	I0930 11:30:31.940253   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.940567   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:31.940600   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.943464   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.943880   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.943909   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.944048   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.944346   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.944569   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.944768   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.028986   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:32.029069   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:32.087362   34720 fix.go:56] duration metric: took 19.180639105s for fixHost
	I0930 11:30:32.087405   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.090539   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.090962   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.090988   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.091151   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.091371   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091585   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091707   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.091851   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:32.092025   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:32.092044   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:32.206950   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695832.171402259
	
	I0930 11:30:32.206975   34720 fix.go:216] guest clock: 1727695832.171402259
	I0930 11:30:32.206982   34720 fix.go:229] Guest: 2024-09-30 11:30:32.171402259 +0000 UTC Remote: 2024-09-30 11:30:32.087388641 +0000 UTC m=+308.814519334 (delta=84.013618ms)
	I0930 11:30:32.207008   34720 fix.go:200] guest clock delta is within tolerance: 84.013618ms
	I0930 11:30:32.207014   34720 start.go:83] releasing machines lock for "ha-033260", held for 19.300329364s
	I0930 11:30:32.207037   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.207322   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:32.209968   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210394   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.210419   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210638   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211106   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211267   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211375   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:32.211419   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.211462   34720 ssh_runner.go:195] Run: cat /version.json
	I0930 11:30:32.211487   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.213826   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214176   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214200   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214221   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214463   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.214607   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.214713   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.214734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214757   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214877   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.214902   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.215061   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.215198   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.215320   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.318873   34720 ssh_runner.go:195] Run: systemctl --version
	I0930 11:30:32.325516   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:32.483433   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:32.489924   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:32.489999   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:32.509691   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:32.509716   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:32.509773   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:32.529220   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:32.544880   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:32.544953   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:32.561347   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:32.576185   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:32.696192   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:32.856000   34720 docker.go:233] disabling docker service ...
	I0930 11:30:32.856061   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:32.872115   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:32.886462   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:33.019718   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:33.149810   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:33.165943   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:33.188911   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:33.188984   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.202121   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:33.202191   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.214960   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.227336   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.239366   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:33.251818   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.264121   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.285246   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.297242   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:30:33.307951   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:30:33.308020   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:30:33.324031   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:30:33.335459   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:33.464418   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:30:33.563219   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:30:33.563313   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:30:33.568915   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:30:33.568982   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:30:33.575600   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:30:33.617027   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:30:33.617123   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.651093   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.682607   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:30:33.684108   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:33.687198   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687568   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:33.687586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687860   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:30:33.692422   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:30:33.706358   34720 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:30:33.706513   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:33.706553   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:33.741648   34720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:30:33.741712   34720 ssh_runner.go:195] Run: which lz4
	I0930 11:30:33.746514   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:30:33.746605   34720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:30:33.751033   34720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:30:33.751094   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:30:35.211096   34720 crio.go:462] duration metric: took 1.464517464s to copy over tarball
	I0930 11:30:35.211178   34720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:30:37.290495   34720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.079293521s)
	I0930 11:30:37.290519   34720 crio.go:469] duration metric: took 2.079396835s to extract the tarball
	I0930 11:30:37.290526   34720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:30:37.328103   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:37.375779   34720 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:30:37.375803   34720 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:30:37.375810   34720 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:30:37.375919   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:30:37.376009   34720 ssh_runner.go:195] Run: crio config
	I0930 11:30:37.430483   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:30:37.430505   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:30:37.430513   34720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:30:37.430534   34720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:30:37.430658   34720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:30:37.430678   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:30:37.430719   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:30:37.447824   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:30:37.447927   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
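Editor's note: the generated static pod above runs kube-vip with leader election for the control-plane VIP 192.168.39.254 (vip_leaseduration=5, vip_renewdeadline=3, vip_retryperiod=1). The usual client-go convention is leaseDuration > renewDeadline > retryPeriod; the sketch below is an illustrative sanity check of that ordering for the values in this manifest, not part of minikube.

    // vipcheck.go - illustrative check of the leader-election timings in the
    // kube-vip manifest above. The ordering rule (lease > renew > retry) is the
    // common client-go convention and is assumed here, not taken from the log.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	lease := 5 * time.Second // vip_leaseduration
    	renew := 3 * time.Second // vip_renewdeadline
    	retry := 1 * time.Second // vip_retryperiod

    	if lease > renew && renew > retry {
    		fmt.Println("timings look consistent:", lease, renew, retry)
    	} else {
    		fmt.Println("timings violate lease > renew > retry")
    	}
    }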
	I0930 11:30:37.447977   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:30:37.458530   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:30:37.458608   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:30:37.469126   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:30:37.487666   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:30:37.505980   34720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:30:37.524942   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:30:37.543099   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:30:37.547174   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
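Editor's note: the bash one-liner above drops any existing line ending in a tab plus control-plane.minikube.internal from /etc/hosts, appends the fresh VIP mapping, and copies the result back into place. A minimal Go sketch of the same replace-or-append pattern follows; the output path /tmp/hosts.new is an assumption (a real tool would copy it over /etc/hosts with root, as the logged command does).

    // hostsfix.go - illustrative replace-or-append of a hosts entry, mirroring
    // the bash one-liner in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue // drop the stale mapping
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	// Writing /etc/hosts directly needs root; write a scratch copy instead.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("wrote /tmp/hosts.new with entry:", entry)
    }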
	I0930 11:30:37.560565   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:37.703633   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:30:37.722433   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:30:37.722455   34720 certs.go:194] generating shared ca certs ...
	I0930 11:30:37.722471   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:37.722631   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:30:37.722669   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:30:37.722678   34720 certs.go:256] generating profile certs ...
	I0930 11:30:37.722756   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:30:37.722813   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:30:37.722850   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:30:37.722861   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:30:37.722873   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:30:37.722886   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:30:37.722898   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:30:37.722909   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:30:37.722931   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:30:37.722944   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:30:37.722956   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:30:37.723015   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:30:37.723047   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:30:37.723058   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:30:37.723082   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:30:37.723107   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:30:37.723127   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:30:37.723167   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:37.723194   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:37.723207   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:30:37.723219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:30:37.723778   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:30:37.765086   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:30:37.796973   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:30:37.825059   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:30:37.855521   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:30:37.899131   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:30:37.930900   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:30:37.980558   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:30:38.038804   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:30:38.087704   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:30:38.115070   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:30:38.143055   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:30:38.165228   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:30:38.181120   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:30:38.193472   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199554   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199622   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.206544   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:30:38.218674   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:30:38.230696   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235800   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235869   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.242027   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:30:38.253962   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:30:38.265695   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270860   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270930   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.277134   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
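Editor's note: the sequence above installs each CA into the guest trust store the OpenSSL way: "openssl x509 -hash -noout" prints the subject hash (b5213941, 51391683, 3ec20f2e in this run) and the certificate is then linked as /etc/ssl/certs/<hash>.0 so OpenSSL-linked clients can locate it. A sketch of the same step in Go follows; it assumes openssl is on PATH and that the process may write /etc/ssl/certs, and the certificate path is taken from the log.

    // catrust.go - illustrative version of the trust-store step above: ask
    // openssl for the subject hash of a CA cert, then link it as
    // /etc/ssl/certs/<hash>.0.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// Replace any stale link, then point <hash>.0 at the certificate.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", cert)
    }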
	I0930 11:30:38.288946   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:30:38.294078   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:30:38.300823   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:30:38.307442   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:30:38.314085   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:30:38.320482   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:30:38.327174   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
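Editor's note: "openssl x509 -checkend 86400" asks whether a certificate expires within the next 86400 seconds (24 hours); minikube runs it against each control-plane cert before deciding whether a restart can reuse them. The sketch below does the equivalent check with Go's crypto/x509; the file path is taken from the log and the 24h window mirrors the -checkend argument.

    // checkend.go - illustrative equivalent of "openssl x509 -checkend 86400":
    // report whether a PEM certificate expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
    }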
	I0930 11:30:38.333995   34720 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:30:38.334150   34720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:30:38.334251   34720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:30:38.372351   34720 cri.go:89] found id: ""
	I0930 11:30:38.372413   34720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:30:38.383026   34720 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:30:38.383043   34720 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:30:38.383100   34720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:30:38.394015   34720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:30:38.394528   34720 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:30:38.394558   34720 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.249:8443
	I0930 11:30:38.394772   34720 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-3842/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I0930 11:30:38.395022   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.395487   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.395704   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:30:38.396149   34720 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:30:38.396377   34720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:30:38.407784   34720 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0930 11:30:38.407813   34720 kubeadm.go:597] duration metric: took 24.764144ms to restartPrimaryControlPlane
	I0930 11:30:38.407821   34720 kubeadm.go:394] duration metric: took 73.840194ms to StartCluster
	I0930 11:30:38.407838   34720 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.407924   34720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.408750   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.409039   34720 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:30:38.409099   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:30:38.409119   34720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:30:38.409305   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.411175   34720 out.go:177] * Enabled addons: 
	I0930 11:30:38.412776   34720 addons.go:510] duration metric: took 3.663147ms for enable addons: enabled=[]
	I0930 11:30:38.412820   34720 start.go:246] waiting for cluster config update ...
	I0930 11:30:38.412828   34720 start.go:255] writing updated cluster config ...
	I0930 11:30:38.414670   34720 out.go:201] 
	I0930 11:30:38.416408   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.416501   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.418474   34720 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:30:38.419875   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:38.419902   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:30:38.420019   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:30:38.420031   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:30:38.420138   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.420331   34720 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:38.420373   34720 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:30:38.420384   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:38.420389   34720 fix.go:54] fixHost starting: m02
	I0930 11:30:38.420682   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:38.420704   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:38.436048   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0930 11:30:38.436591   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:38.437106   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:38.437129   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:38.437434   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:38.437608   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:38.437762   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:30:38.439609   34720 fix.go:112] recreateIfNeeded on ha-033260-m02: state=Stopped err=<nil>
	I0930 11:30:38.439637   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	W0930 11:30:38.439785   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:38.443504   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m02" ...
	I0930 11:30:38.445135   34720 main.go:141] libmachine: (ha-033260-m02) Calling .Start
	I0930 11:30:38.445476   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:30:38.446588   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:30:38.447039   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:30:38.447376   34720 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:30:38.448426   34720 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:30:39.710879   34720 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:30:39.711874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.712365   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.712441   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.712367   35943 retry.go:31] will retry after 217.001727ms: waiting for machine to come up
	I0930 11:30:39.931176   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.931746   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.931795   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.931690   35943 retry.go:31] will retry after 360.379717ms: waiting for machine to come up
	I0930 11:30:40.293305   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.293927   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.293956   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.293884   35943 retry.go:31] will retry after 440.189289ms: waiting for machine to come up
	I0930 11:30:40.735666   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.736141   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.736171   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.736077   35943 retry.go:31] will retry after 458.690004ms: waiting for machine to come up
	I0930 11:30:41.196951   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.197392   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.197421   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.197336   35943 retry.go:31] will retry after 554.052986ms: waiting for machine to come up
	I0930 11:30:41.753199   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.753680   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.753707   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.753643   35943 retry.go:31] will retry after 931.699083ms: waiting for machine to come up
	I0930 11:30:42.686931   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:42.687320   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:42.687351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:42.687256   35943 retry.go:31] will retry after 1.166098452s: waiting for machine to come up
	I0930 11:30:43.855595   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:43.856179   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:43.856196   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:43.856132   35943 retry.go:31] will retry after 902.212274ms: waiting for machine to come up
	I0930 11:30:44.759588   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:44.760139   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:44.760169   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:44.760094   35943 retry.go:31] will retry after 1.732785907s: waiting for machine to come up
	I0930 11:30:46.495220   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:46.495722   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:46.495751   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:46.495670   35943 retry.go:31] will retry after 1.455893126s: waiting for machine to come up
	I0930 11:30:47.952835   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:47.953164   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:47.953186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:47.953117   35943 retry.go:31] will retry after 1.846394006s: waiting for machine to come up
	I0930 11:30:49.801836   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:49.802224   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:49.802255   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:49.802148   35943 retry.go:31] will retry after 3.334677314s: waiting for machine to come up
	I0930 11:30:53.140758   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:53.141162   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:53.141198   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:53.141142   35943 retry.go:31] will retry after 4.392553354s: waiting for machine to come up
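Editor's note: the "will retry after ..." lines above are minikube polling libvirt for the restarted VM's DHCP lease, with delays that grow (and jitter) from a few hundred milliseconds up to several seconds. The sketch below shows that retry pattern in isolation; lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff constants are illustrative rather than minikube's exact values.

    // waitip.go - illustrative retry loop with growing, jittered delays, in the
    // spirit of the "waiting for machine to come up" retries above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder; a real implementation would inspect the
    // libvirt DHCP leases for the domain's MAC address.
    func lookupIP() (string, error) {
    	return "", errors.New("no lease yet")
    }

    func main() {
    	base := 200 * time.Millisecond
    	for attempt := 1; attempt <= 13; attempt++ {
    		if ip, err := lookupIP(); err == nil {
    			fmt.Println("got IP:", ip)
    			return
    		}
    		// Grow the delay each attempt and add jitter, roughly matching the
    		// increasing waits seen in the log.
    		delay := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, delay)
    		time.Sleep(delay)
    	}
    	fmt.Println("gave up waiting for an IP")
    }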
	I0930 11:30:57.535667   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536094   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536115   34720 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:30:57.536128   34720 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:30:57.536667   34720 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:30:57.536690   34720 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:30:57.536702   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.536717   34720 main.go:141] libmachine: (ha-033260-m02) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"}
	I0930 11:30:57.536733   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:30:57.538801   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539092   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.539118   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539287   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:30:57.539307   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:30:57.539337   34720 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:57.539351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:30:57.539361   34720 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:30:57.665932   34720 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:57.666273   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:30:57.666869   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:57.669186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669581   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.669611   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669933   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:57.670195   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:57.670214   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:57.670410   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.672489   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.672840   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.672867   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.673009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.673202   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673389   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673514   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.673661   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.673838   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.673848   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:57.786110   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:57.786133   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786377   34720 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:30:57.786400   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786574   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.789039   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789439   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.789465   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789633   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.789794   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.789948   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.790053   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.790195   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.790374   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.790385   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:30:57.917415   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:30:57.917438   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.920154   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920496   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.920529   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920721   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.920892   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921046   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921171   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.921311   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.921493   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.921509   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:58.045391   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:58.045417   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:58.045437   34720 buildroot.go:174] setting up certificates
	I0930 11:30:58.045462   34720 provision.go:84] configureAuth start
	I0930 11:30:58.045479   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:58.045758   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.048321   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048721   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.048743   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048920   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.051229   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051564   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.051591   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051758   34720 provision.go:143] copyHostCerts
	I0930 11:30:58.051783   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051822   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:58.051830   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051885   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:58.051973   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.051994   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:58.051999   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.052023   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:58.052120   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052140   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:58.052144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:58.052236   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
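Editor's note: configureAuth regenerates the machine's docker-machine server certificate with the SANs listed above (127.0.0.1, 192.168.39.3, ha-033260-m02, localhost, minikube), signed by the shared ca.pem / ca-key.pem. The sketch below issues a certificate with the same SANs using crypto/x509; for brevity it is self-signed rather than CA-signed, and the organization string is copied from the log's org= field, so treat it as an illustration of the SAN handling only.

    // servercert.go - minimal sketch of issuing a server certificate with the
    // SANs from the log line above. Self-signed here; the real flow signs with
    // the shared CA key.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-033260-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }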
	I0930 11:30:58.137309   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:58.137363   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:58.137388   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.139915   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140158   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.140185   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140386   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.140552   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.140695   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.140798   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.228976   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:58.229076   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:58.254635   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:58.254717   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:30:58.279904   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:58.279982   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:58.305451   34720 provision.go:87] duration metric: took 259.975115ms to configureAuth
	I0930 11:30:58.305480   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:58.305758   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:58.305834   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.308335   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.308803   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.308825   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.309009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.309198   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309332   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309439   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.309633   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.309804   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.309818   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:58.549247   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:58.549271   34720 machine.go:96] duration metric: took 879.062425ms to provisionDockerMachine
	I0930 11:30:58.549282   34720 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:30:58.549291   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:58.549307   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.549711   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:58.549753   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.552476   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.552924   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.552952   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.553077   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.553265   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.553440   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.553591   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.641113   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:58.645683   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:58.645710   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:58.645780   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:58.645871   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:58.645881   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:58.645976   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:58.656118   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:58.683428   34720 start.go:296] duration metric: took 134.134961ms for postStartSetup
	I0930 11:30:58.683471   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.683772   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:58.683796   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.686150   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686552   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.686580   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686712   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.686921   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.687033   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.687137   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.772957   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:58.773054   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:58.831207   34720 fix.go:56] duration metric: took 20.410809297s for fixHost
	I0930 11:30:58.831256   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.834153   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834531   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.834561   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834754   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.834963   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835129   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835280   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.835497   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.835715   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.835747   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:58.950852   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695858.923209005
	
	I0930 11:30:58.950874   34720 fix.go:216] guest clock: 1727695858.923209005
	I0930 11:30:58.950882   34720 fix.go:229] Guest: 2024-09-30 11:30:58.923209005 +0000 UTC Remote: 2024-09-30 11:30:58.831234705 +0000 UTC m=+335.558365405 (delta=91.9743ms)
	I0930 11:30:58.950897   34720 fix.go:200] guest clock delta is within tolerance: 91.9743ms
	I0930 11:30:58.950902   34720 start.go:83] releasing machines lock for "ha-033260-m02", held for 20.530522823s
	I0930 11:30:58.950922   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.951203   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.954037   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.954470   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.954495   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.956428   34720 out.go:177] * Found network options:
	I0930 11:30:58.958147   34720 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:30:58.959662   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.959685   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960216   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960383   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960470   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:58.960516   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:30:58.960557   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.960638   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:58.960661   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.963506   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963693   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.963901   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964044   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964186   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964190   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.964217   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964364   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964379   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964505   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.964524   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964643   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964756   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:59.185932   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:59.192578   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:59.192645   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:59.212639   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:59.212663   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:59.212730   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:59.233596   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:59.248239   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:59.248310   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:59.262501   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:59.277031   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:59.408627   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:59.575087   34720 docker.go:233] disabling docker service ...
	I0930 11:30:59.575157   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:59.590510   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:59.605151   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:59.739478   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:59.876906   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:59.891632   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:59.911543   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:59.911601   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.923050   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:59.923114   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.934427   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.945682   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.957111   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:59.968813   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.980975   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.999767   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:31:00.011463   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:31:00.021740   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:31:00.021804   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:31:00.036575   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:31:00.046724   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:00.166031   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:31:00.263048   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:31:00.263104   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:31:00.268250   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:31:00.268319   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:31:00.272426   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:31:00.321494   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:31:00.321561   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.350506   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.381505   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:31:00.383057   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:31:00.384433   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:31:00.387430   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.387871   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:31:00.387903   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.388092   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:31:00.392819   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:00.406199   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:31:00.406474   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:00.406842   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.406891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.421565   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0930 11:31:00.422022   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.422477   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.422501   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.422814   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.423031   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:31:00.424747   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:31:00.425025   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.425059   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.439760   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:31:00.440237   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.440699   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.440716   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.441029   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.441215   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:31:00.441357   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:31:00.441367   34720 certs.go:194] generating shared ca certs ...
	I0930 11:31:00.441380   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.441501   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:31:00.441541   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:31:00.441555   34720 certs.go:256] generating profile certs ...
	I0930 11:31:00.441653   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:31:00.441679   34720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173
	I0930 11:31:00.441696   34720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:31:00.711479   34720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 ...
	I0930 11:31:00.711512   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173: {Name:mk8969b2efcc5de06d527c6abe25d7f8f8bfba86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711706   34720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 ...
	I0930 11:31:00.711723   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173: {Name:mkcb971c29eb187169c6672af3a12c14dd523134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711815   34720 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:31:00.711977   34720 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:31:00.712110   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:31:00.712126   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:31:00.712141   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:31:00.712175   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:31:00.712192   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:31:00.712204   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:31:00.712217   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:31:00.712228   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:31:00.712238   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:31:00.712287   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:31:00.712314   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:31:00.712324   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:31:00.712348   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:31:00.712369   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:31:00.712408   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:31:00.712446   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:31:00.712473   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:31:00.712487   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:00.712499   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:31:00.712528   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:31:00.715756   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716154   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:31:00.716181   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716374   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:31:00.716558   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:31:00.716720   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:31:00.716893   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:31:00.794084   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:31:00.799675   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:31:00.812361   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:31:00.817141   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:31:00.828855   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:31:00.833566   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:31:00.844934   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:31:00.849462   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:31:00.860080   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:31:00.864183   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:31:00.875695   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:31:00.880202   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:31:00.891130   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:31:00.918693   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:31:00.944303   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:31:00.969526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:31:00.996710   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:31:01.023015   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:31:01.050381   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:31:01.076757   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:31:01.103526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:31:01.129114   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:31:01.155177   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:31:01.180954   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:31:01.199391   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:31:01.218184   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:31:01.238266   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:31:01.258183   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:31:01.276632   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:31:01.294303   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:31:01.312244   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:31:01.318735   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:31:01.330839   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.335928   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.336000   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.342463   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:31:01.353941   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:31:01.365658   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370653   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370714   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.376795   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:31:01.388155   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:31:01.399831   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404901   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404967   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.411138   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:31:01.422294   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:31:01.426988   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:31:01.433816   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:31:01.440682   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:31:01.447200   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:31:01.454055   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:31:01.460508   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:31:01.466735   34720 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:31:01.466882   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:31:01.466926   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:31:01.466986   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:31:01.485425   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:31:01.485500   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:31:01.485555   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:31:01.495844   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:31:01.495903   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:31:01.505526   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:31:01.523077   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:31:01.540915   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:31:01.558204   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:31:01.562410   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:01.575484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.701502   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.719655   34720 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:31:01.719937   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:01.723162   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:31:01.724484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.910906   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.933340   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:31:01.933718   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:31:01.933803   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:31:01.934081   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:01.934248   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:01.934259   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:01.934274   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:01.934285   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:06.735523   34720 round_trippers.go:574] Response Status:  in 4801 milliseconds
	I0930 11:31:07.735873   34720 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735937   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735944   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:07.735954   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:07.735960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:17.737130   34720 round_trippers.go:574] Response Status:  in 10001 milliseconds
	I0930 11:31:17.737228   34720 node_ready.go:53] error getting node "ha-033260-m02": Get "https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.39.1:51024->192.168.39.249:8443: read: connection reset by peer
	I0930 11:31:17.737312   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:17.737324   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:17.737335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:17.737343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.500223   34720 round_trippers.go:574] Response Status: 200 OK in 3762 milliseconds
	I0930 11:31:21.501292   34720 node_ready.go:53] node "ha-033260-m02" has status "Ready":"Unknown"
	I0930 11:31:21.501373   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.501386   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.501397   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.501404   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.519310   34720 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:31:21.934926   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.934946   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.934956   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.934960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.940164   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:22.434503   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.434527   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.434544   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.434553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.438661   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:22.934869   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.934914   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.934923   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.934927   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.937891   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.435280   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.435301   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.435309   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.435314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.441790   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.444141   34720 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:31:23.444180   34720 node_ready.go:38] duration metric: took 21.510052339s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:23.444195   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:31:23.444252   34720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:31:23.444273   34720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:31:23.444364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:23.444380   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.444392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.444401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.454505   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:23.465935   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.466047   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:31:23.466061   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.466072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.466081   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.474857   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:23.475614   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.475635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.475647   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.475654   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.478510   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.479069   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.479097   34720 pod_ready.go:82] duration metric: took 13.131126ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479109   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479186   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:31:23.479199   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.479208   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.479213   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.485985   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.486909   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.486931   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.486941   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.486947   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490284   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.490832   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.490853   34720 pod_ready.go:82] duration metric: took 11.73655ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490864   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:31:23.490962   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.490972   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.498681   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:23.499421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.499443   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.499460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.499466   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.503369   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.503948   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.503974   34720 pod_ready.go:82] duration metric: took 13.102363ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.503986   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.504068   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:23.504080   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.504090   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.504097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.510528   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.511092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.511107   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.511115   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.511122   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.515703   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:24.004536   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.004560   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.004580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.004588   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.008341   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.009009   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.009023   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.009030   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.009038   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.011924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:24.504942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.504982   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.504991   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.504996   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.508600   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.509408   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.509428   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.509437   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.509441   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.512140   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.005082   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.005104   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.005112   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.005115   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.008608   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:25.009145   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.009159   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.009166   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.009172   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.012052   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.505333   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.505422   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.505445   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.505470   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.544680   34720 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0930 11:31:25.545744   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.545758   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.545766   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.545771   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.559955   34720 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0930 11:31:25.560548   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:26.004848   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.004869   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.004877   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.004881   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.008562   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.009380   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.009397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.009407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.009413   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.012491   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.504290   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.504315   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.504327   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.504335   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.508059   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.508795   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.508813   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.508823   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.508828   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.512273   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.004525   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.004546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.004555   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.004560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009158   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:27.009942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.009959   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.009967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.013093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.505035   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.505082   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.505093   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.505100   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.508864   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.509652   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.509670   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.509681   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.509687   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.512440   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:28.005011   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.005040   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.005051   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.005058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.013343   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:28.014728   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.014745   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.014754   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.014758   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.036177   34720 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0930 11:31:28.037424   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:28.504206   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.504241   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.504249   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.504254   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.511361   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:28.512356   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.512373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.512383   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.512389   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.525172   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:31:29.005163   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.005184   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.005195   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.005200   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.010684   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.011486   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.011516   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.011528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.011535   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.017470   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.505132   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.505152   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.505162   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.505168   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.518955   34720 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0930 11:31:29.519584   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.519602   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.519612   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.519619   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.530475   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:30.004860   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.004881   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.004889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.004893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.008564   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.009192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.009207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.009215   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.009220   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.013399   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.504171   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.504195   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.504205   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.504210   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.507972   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.509257   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.509275   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.509283   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.509286   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.513975   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.514510   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:31.004737   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.004765   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.004775   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.004780   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010196   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:31.010880   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.010900   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.010912   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010919   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.014567   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:31.504379   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.504397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.504405   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.504409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.511899   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:31.513088   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.513111   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.513122   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.513128   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.516398   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.005079   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.005119   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.005131   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.005138   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.009300   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:32.010097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.010118   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.010130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.010137   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.013237   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.505168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.505192   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.505203   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.505209   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.509155   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.509935   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.509953   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.509960   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.509964   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.513296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:33.004767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.004802   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.004812   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.004818   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.009316   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:33.009983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.009997   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.010005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.010018   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.012955   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:33.013498   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:33.504397   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.504432   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.504443   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.504450   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.620464   34720 round_trippers.go:574] Response Status: 200 OK in 115 milliseconds
	I0930 11:31:33.621445   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.621467   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.621479   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.621486   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.624318   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.004311   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:34.004332   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.004341   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.004346   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.008601   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.009530   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.009546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.009553   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.009556   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.013047   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.013767   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.013788   34720 pod_ready.go:82] duration metric: took 10.509794387s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013800   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013877   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:31:34.013888   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.013899   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.013908   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.021427   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:34.022374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.022393   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.022405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.022412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.026491   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.027124   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.027154   34720 pod_ready.go:82] duration metric: took 13.341195ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027184   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027276   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:31:34.027289   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.027300   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.027306   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.031483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.032050   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.032064   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.032072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.032075   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.035296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.035760   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.035779   34720 pod_ready.go:82] duration metric: took 8.586877ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035787   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035853   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:31:34.035863   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.035870   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.035874   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.040970   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.041904   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.041918   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.041926   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.041929   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.046986   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.047525   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.047542   34720 pod_ready.go:82] duration metric: took 11.747596ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047550   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047603   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:31:34.047611   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.047617   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.047621   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.053430   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.054003   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.054018   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.054025   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.054029   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.056888   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.057338   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.057358   34720 pod_ready.go:82] duration metric: took 9.802193ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.057367   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.204770   34720 request.go:632] Waited for 147.330113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204839   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204844   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.204851   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.204860   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.209352   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.404334   34720 request.go:632] Waited for 194.306843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404424   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.404441   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.404444   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.408185   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.605268   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.605293   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.605306   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.605311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.608441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.804521   34720 request.go:632] Waited for 195.318558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804587   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804592   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.804600   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.804607   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.808658   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.058569   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.058598   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.058609   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.058614   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.062153   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.204479   34720 request.go:632] Waited for 141.249746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204567   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204575   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.204586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.204594   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.209332   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.558083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.558103   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.558111   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.558116   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.562046   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.605131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.605167   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.605179   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.605184   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.616080   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:36.058179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:36.058207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.058218   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.058236   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.062566   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:36.063353   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:36.063373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.063384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.063390   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.066635   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.067352   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.067373   34720 pod_ready.go:82] duration metric: took 2.009999965s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.067382   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.204802   34720 request.go:632] Waited for 137.362306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204868   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204890   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.204901   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.204907   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.208231   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.404396   34720 request.go:632] Waited for 195.331717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404465   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.404473   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.404477   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.408489   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.409278   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.409299   34720 pod_ready.go:82] duration metric: took 341.910503ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.409308   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.604639   34720 request.go:632] Waited for 195.258772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604699   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604706   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.604716   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.604721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.608453   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.804560   34720 request.go:632] Waited for 195.30805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804622   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.804645   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.804651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.808127   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.808836   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.808857   34720 pod_ready.go:82] duration metric: took 399.543561ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.808867   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.004923   34720 request.go:632] Waited for 195.985958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004973   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004978   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.004985   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.004989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.008223   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.205282   34720 request.go:632] Waited for 196.371879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205357   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205362   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.205369   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.205374   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.208700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.209207   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.209239   34720 pod_ready.go:82] duration metric: took 400.365138ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.209250   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.405282   34720 request.go:632] Waited for 195.959121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405389   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405398   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.405409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.405429   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.409314   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.605347   34720 request.go:632] Waited for 195.282379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.605450   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.605459   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.608764   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.609479   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.609498   34720 pod_ready.go:82] duration metric: took 400.240233ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.609507   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.804579   34720 request.go:632] Waited for 195.010584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804657   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804664   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.804671   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.804675   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.808363   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.005248   34720 request.go:632] Waited for 196.304263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005314   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005321   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.005330   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.005333   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.009635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:38.010535   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.010557   34720 pod_ready.go:82] duration metric: took 401.042919ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.010566   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.204595   34720 request.go:632] Waited for 193.96721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204665   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204677   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.204689   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.204696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.208393   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.404559   34720 request.go:632] Waited for 195.429784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404620   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.404641   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.404646   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.408057   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.408674   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.408694   34720 pod_ready.go:82] duration metric: took 398.12275ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.408703   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.605374   34720 request.go:632] Waited for 196.589593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605437   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.605444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.605449   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.609411   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.804516   34720 request.go:632] Waited for 194.287587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804579   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.804586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.804589   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.808043   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.808604   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.808623   34720 pod_ready.go:82] duration metric: took 399.91394ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.808637   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.004815   34720 request.go:632] Waited for 196.10639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004881   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004887   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.004895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.004900   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.008293   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.204330   34720 request.go:632] Waited for 195.292523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204402   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204410   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.204419   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.204428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.208212   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.208803   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.208826   34720 pod_ready.go:82] duration metric: took 400.181261ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.208843   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.404860   34720 request.go:632] Waited for 195.933233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404913   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404919   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.404926   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.404931   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.408874   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.604903   34720 request.go:632] Waited for 195.413864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604970   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604975   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.604983   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.604987   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.608209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.608764   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.608784   34720 pod_ready.go:82] duration metric: took 399.933732ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.608794   34720 pod_ready.go:39] duration metric: took 16.164585673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:31:39.608807   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:31:39.608855   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:31:39.626199   34720 api_server.go:72] duration metric: took 37.906495975s to wait for apiserver process to appear ...
	I0930 11:31:39.626228   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:31:39.626249   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:31:39.630779   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:31:39.630856   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:31:39.630864   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.630872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.630879   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.631851   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:31:39.631971   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:31:39.631987   34720 api_server.go:131] duration metric: took 5.751654ms to wait for apiserver health ...
	I0930 11:31:39.631994   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:31:39.805247   34720 request.go:632] Waited for 173.189912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805322   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805328   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.805335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.805339   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.811658   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:39.818704   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:31:39.818737   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818745   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818751   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:39.818754   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:39.818758   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:39.818761   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:39.818766   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:39.818769   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:39.818772   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:39.818777   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:39.818781   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:39.818787   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:39.818792   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:39.818797   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:39.818803   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:39.818809   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:39.818814   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:39.818820   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:39.818828   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:39.818834   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:39.818840   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:39.818843   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:39.818846   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:39.818852   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:39.818855   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:39.818858   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:39.818864   34720 system_pods.go:74] duration metric: took 186.864889ms to wait for pod list to return data ...
	I0930 11:31:39.818873   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:31:40.005326   34720 request.go:632] Waited for 186.370068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005389   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.005396   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.005401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.009301   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.009537   34720 default_sa.go:45] found service account: "default"
	I0930 11:31:40.009555   34720 default_sa.go:55] duration metric: took 190.676192ms for default service account to be created ...
	I0930 11:31:40.009564   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:31:40.205063   34720 request.go:632] Waited for 195.430952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205139   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.205147   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.205150   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.210696   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:40.219002   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:31:40.219052   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219065   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219074   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:40.219081   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:40.219086   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:40.219092   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:40.219097   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:40.219103   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:40.219108   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:40.219115   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:40.219123   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:40.219130   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:40.219137   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:40.219145   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:40.219149   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:40.219155   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:40.219158   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:40.219162   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:40.219168   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:40.219171   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:40.219177   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:40.219181   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:40.219186   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:40.219190   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:40.219193   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:40.219196   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:40.219204   34720 system_pods.go:126] duration metric: took 209.632746ms to wait for k8s-apps to be running ...
	I0930 11:31:40.219213   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:31:40.219257   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:31:40.234570   34720 system_svc.go:56] duration metric: took 15.34883ms WaitForService to wait for kubelet
	I0930 11:31:40.234600   34720 kubeadm.go:582] duration metric: took 38.514901899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:31:40.234618   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:31:40.405060   34720 request.go:632] Waited for 170.372351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405138   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.405146   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.405152   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.409008   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.411040   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411072   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411093   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411098   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411104   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411112   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411118   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411123   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411130   34720 node_conditions.go:105] duration metric: took 176.506295ms to run NodePressure ...
	I0930 11:31:40.411143   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:31:40.411178   34720 start.go:255] writing updated cluster config ...
	I0930 11:31:40.413535   34720 out.go:201] 
	I0930 11:31:40.415246   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:40.415334   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.417113   34720 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:31:40.418650   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:31:40.418678   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:31:40.418775   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:31:40.418789   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:31:40.418878   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.419069   34720 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:31:40.419116   34720 start.go:364] duration metric: took 28.328µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:31:40.419128   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:31:40.419133   34720 fix.go:54] fixHost starting: m03
	I0930 11:31:40.419393   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:40.419421   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:40.434730   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0930 11:31:40.435197   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:40.435685   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:40.435709   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:40.436046   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:40.436205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:40.436359   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:31:40.437971   34720 fix.go:112] recreateIfNeeded on ha-033260-m03: state=Stopped err=<nil>
	I0930 11:31:40.437995   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	W0930 11:31:40.438139   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:31:40.440134   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m03" ...
	I0930 11:31:40.441557   34720 main.go:141] libmachine: (ha-033260-m03) Calling .Start
	I0930 11:31:40.441787   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:31:40.442656   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:31:40.442963   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:31:40.443304   34720 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:31:40.443900   34720 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:31:41.716523   34720 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:31:41.717310   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.717755   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.717843   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.717745   36275 retry.go:31] will retry after 213.974657ms: waiting for machine to come up
	I0930 11:31:41.933006   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.933445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.933470   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.933400   36275 retry.go:31] will retry after 366.443935ms: waiting for machine to come up
	I0930 11:31:42.300826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.301240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.301268   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.301200   36275 retry.go:31] will retry after 298.736034ms: waiting for machine to come up
	I0930 11:31:42.601863   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.602344   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.602373   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.602300   36275 retry.go:31] will retry after 422.049065ms: waiting for machine to come up
	I0930 11:31:43.025989   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.026495   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.026518   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.026460   36275 retry.go:31] will retry after 501.182735ms: waiting for machine to come up
	I0930 11:31:43.529199   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.529601   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.529643   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.529556   36275 retry.go:31] will retry after 658.388185ms: waiting for machine to come up
	I0930 11:31:44.189982   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:44.190445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:44.190485   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:44.190396   36275 retry.go:31] will retry after 869.323325ms: waiting for machine to come up
	I0930 11:31:45.061299   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:45.061826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:45.061855   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:45.061762   36275 retry.go:31] will retry after 1.477543518s: waiting for machine to come up
	I0930 11:31:46.540654   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:46.541062   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:46.541088   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:46.541024   36275 retry.go:31] will retry after 1.217619831s: waiting for machine to come up
	I0930 11:31:47.760283   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:47.760670   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:47.760692   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:47.760626   36275 retry.go:31] will retry after 1.524149013s: waiting for machine to come up
	I0930 11:31:49.286693   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:49.287173   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:49.287205   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:49.287119   36275 retry.go:31] will retry after 2.547999807s: waiting for machine to come up
	I0930 11:31:51.836378   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:51.836878   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:51.836903   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:51.836847   36275 retry.go:31] will retry after 3.478582774s: waiting for machine to come up
	I0930 11:31:55.318753   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:55.319267   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:55.319288   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:55.319225   36275 retry.go:31] will retry after 4.232487143s: waiting for machine to come up
	I0930 11:31:59.554587   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555031   34720 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:31:59.555054   34720 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:31:59.555067   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555464   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.555482   34720 main.go:141] libmachine: (ha-033260-m03) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"}
	I0930 11:31:59.555498   34720 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:31:59.555507   34720 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:31:59.555514   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:31:59.558171   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558619   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.558660   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558780   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:31:59.558806   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:31:59.558840   34720 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:31:59.558849   34720 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:31:59.558869   34720 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:31:59.689497   34720 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 11:31:59.689854   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:31:59.690426   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:31:59.692709   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693063   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.693096   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693354   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:59.693555   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:31:59.693570   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:59.693768   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.695742   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696024   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.696050   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.696286   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696441   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696600   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.696763   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.696989   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.697005   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:31:59.810353   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:31:59.810380   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810618   34720 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:31:59.810647   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810829   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.813335   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813637   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.813661   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813848   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.814001   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814334   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.814507   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.814661   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.814672   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:31:59.949653   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:31:59.949686   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.952597   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.952969   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.952992   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.953242   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.953469   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953637   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953759   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.953884   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.954062   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.954084   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:00.079890   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:00.079918   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:00.079939   34720 buildroot.go:174] setting up certificates
	I0930 11:32:00.079950   34720 provision.go:84] configureAuth start
	I0930 11:32:00.079961   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:32:00.080205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:00.082895   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083281   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.083307   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083437   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.085443   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085756   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.085776   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085897   34720 provision.go:143] copyHostCerts
	I0930 11:32:00.085925   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.085978   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:00.085987   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.086050   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:00.086121   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086137   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:00.086142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:00.086219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086243   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:00.086252   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086288   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:00.086360   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
	I0930 11:32:00.252602   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:00.252654   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:00.252676   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.255361   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255706   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.255731   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255860   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.255996   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.256131   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.256249   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.345059   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:00.345126   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:00.370752   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:00.370827   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:32:00.397640   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:00.397703   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:00.424094   34720 provision.go:87] duration metric: took 344.128805ms to configureAuth
	I0930 11:32:00.424128   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:00.424360   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:00.424480   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.427139   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427536   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.427563   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427770   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.427949   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428043   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428125   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.428217   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.428408   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.428424   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:00.687881   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:00.687919   34720 machine.go:96] duration metric: took 994.35116ms to provisionDockerMachine
	I0930 11:32:00.687935   34720 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:32:00.687950   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:00.687976   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.688322   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:00.688349   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.691216   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691735   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.691763   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691959   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.692185   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.692344   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.692469   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.781946   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:00.786396   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:00.786417   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:00.786494   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:00.786562   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:00.786571   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:00.786646   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:00.796771   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:00.822239   34720 start.go:296] duration metric: took 134.285857ms for postStartSetup
	I0930 11:32:00.822297   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.822594   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:00.822622   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.825375   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825743   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.825764   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825954   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.826142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.826331   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.826492   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.912681   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:00.912751   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:00.970261   34720 fix.go:56] duration metric: took 20.551120789s for fixHost
	I0930 11:32:00.970311   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.973284   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973694   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.973722   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973873   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.974035   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974161   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974267   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.974426   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.974622   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.974633   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:01.099052   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695921.066520843
	
	I0930 11:32:01.099078   34720 fix.go:216] guest clock: 1727695921.066520843
	I0930 11:32:01.099089   34720 fix.go:229] Guest: 2024-09-30 11:32:01.066520843 +0000 UTC Remote: 2024-09-30 11:32:00.970290394 +0000 UTC m=+397.697421093 (delta=96.230449ms)
	I0930 11:32:01.099110   34720 fix.go:200] guest clock delta is within tolerance: 96.230449ms
	I0930 11:32:01.099117   34720 start.go:83] releasing machines lock for "ha-033260-m03", held for 20.679993634s
	I0930 11:32:01.099137   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.099384   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:01.102141   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.102593   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.102620   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.104827   34720 out.go:177] * Found network options:
	I0930 11:32:01.106181   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:32:01.107308   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.107329   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.107343   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.107885   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108079   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108167   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:01.108222   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:32:01.108292   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.108316   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.108408   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:01.108430   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:01.111240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111542   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111663   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111698   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111858   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.111861   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111893   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.112028   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.112064   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112182   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112189   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112347   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.112360   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112529   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.339136   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:01.345573   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:01.345659   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:01.362608   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:01.362630   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:01.362686   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:01.381024   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:01.396259   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:01.396333   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:01.412406   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:01.429258   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:01.562463   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:01.730591   34720 docker.go:233] disabling docker service ...
	I0930 11:32:01.730664   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:01.755797   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:01.769489   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:01.890988   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:02.019465   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:02.036168   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:02.059913   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:02.059981   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.072160   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:02.072247   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.084599   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.096290   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.108573   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:02.120977   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.132246   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.150591   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.162524   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:02.173575   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:02.173660   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:02.188268   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:02.199979   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:02.326960   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:02.439885   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:02.439960   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:02.446734   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:02.446849   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:02.451344   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:02.492029   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:02.492116   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.521734   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.556068   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:02.557555   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:02.558901   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:02.560920   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:02.563759   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564191   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:02.564218   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564482   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:02.569571   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:02.585245   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:02.585463   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:02.585746   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.585790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.617422   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0930 11:32:02.617831   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.618295   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.618314   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.618694   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.618907   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:02.621016   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:02.621337   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.621378   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.636969   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I0930 11:32:02.637538   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.638051   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.638068   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.638431   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.638769   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:02.639005   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:32:02.639018   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:02.639031   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:02.639158   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:02.639204   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:02.639213   34720 certs.go:256] generating profile certs ...
	I0930 11:32:02.639277   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:32:02.639334   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:32:02.639369   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:32:02.639382   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:02.639398   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:02.639410   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:02.639423   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:02.639436   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:32:02.639451   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:32:02.639464   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:32:02.639477   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:32:02.639526   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:02.639556   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:02.639565   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:02.639587   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:02.639609   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:02.639654   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:02.639691   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:02.639715   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:02.639728   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:02.639740   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:02.639764   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:32:02.643357   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.643807   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:32:02.643839   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.644023   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:32:02.644227   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:32:02.644414   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:32:02.644553   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:32:02.726043   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:32:02.732664   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:32:02.744611   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:32:02.750045   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:32:02.763417   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:32:02.768220   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:32:02.780605   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:32:02.786158   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:32:02.802503   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:32:02.809377   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:32:02.821900   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:32:02.827740   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:32:02.842110   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:02.872987   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:02.903102   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:02.932917   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:02.966742   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:32:02.995977   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:32:03.025802   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:32:03.057227   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:32:03.085425   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:03.115042   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:03.142328   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:03.168248   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:32:03.189265   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:32:03.208719   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:32:03.227953   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:32:03.248805   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:32:03.268786   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:32:03.288511   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
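The scp lines above push the cluster-wide key material (shared cluster CA, service-account signing keys, front-proxy CA, etcd CA) from the primary profile onto the new control-plane node; every control plane has to carry identical copies, otherwise tokens and certificates minted by one API server would not verify on another. A small sketch (hypothetical helper, not minikube code) for spot-checking that two nodes ended up with the same material:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"os"
)

// fingerprint returns the SHA-256 of a file; comparing fingerprints across
// control-plane nodes is a quick way to confirm the copies above left every
// node with identical CA and service-account key material.
func fingerprint(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", sha256.Sum256(data)), nil
}

func main() {
	sum, err := fingerprint("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("sha256:", sum)
}
```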
	I0930 11:32:03.309413   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:03.315862   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:03.328610   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333839   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333909   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.340595   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:03.353343   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:03.364689   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369580   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369669   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.376067   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:03.388290   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:03.400003   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405168   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405235   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.411812   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
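The test -L / ln -fs pairs above install each CA under its OpenSSL subject hash: `openssl x509 -hash -noout` prints a short hash (b5213941 for minikubeCA.pem here), and TLS libraries on the node look for a `<hash>.0` symlink in /etc/ssl/certs. A minimal Go sketch of the same idea, shelling out the way ssh_runner does (hypothetical helper, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCA links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject hash, which is how TLS clients on the node locate trusted CAs.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
```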
	I0930 11:32:03.424569   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:03.429588   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:32:03.436748   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:32:03.443675   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:32:03.450618   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:32:03.457889   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:32:03.464815   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
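`-checkend 86400` asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit on any of these checks is what would force certificate regeneration. A rough equivalent with crypto/x509, using one of the paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same check as `openssl x509 -noout -checkend 86400 -in <crt>`:
	// does the certificate remain valid for at least another 24 hours?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
}
```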
	I0930 11:32:03.471778   34720 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:32:03.471887   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
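The drop-in above uses the systemd override idiom: an empty `ExecStart=` line clears the command inherited from kubelet.service before the node-specific ExecStart (with --hostname-override and --node-ip for m03) is set. A minimal sketch of rendering such a drop-in with text/template; the field names and the trimmed flag list are assumptions, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a cut-down systemd override in the same shape as the one
// logged above: clear ExecStart, then set the per-node command line.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.31.1",
		"Node":    "ha-033260-m03",
		"IP":      "192.168.39.238",
	})
}
```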
	I0930 11:32:03.471924   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:32:03.471974   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:32:03.490629   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:32:03.490701   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:32:03.490761   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:03.502695   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:03.502771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:32:03.514300   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:03.532840   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:03.552583   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
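kube-vip.yaml lands in /etc/kubernetes/manifests, the kubelet's static pod directory: the kubelet runs whatever pod specs appear there on its own, without the scheduler, which is how the 192.168.39.254 VIP can come up on each control-plane node before the API server behind it is reachable. A sketch of that final write (rendered manifest bytes assumed already in hand):

```go
package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod drops a manifest into the kubelet's static pod directory.
// The kubelet notices the new file and starts the pod by itself; no API
// server or scheduler is involved, which matters while the VIP is not yet up.
func writeStaticPod(manifest []byte) error {
	dir := "/etc/kubernetes/manifests"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o644)
}

func main() {
	_ = writeStaticPod([]byte("# rendered kube-vip pod spec goes here\n"))
}
```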
	I0930 11:32:03.570717   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:03.574725   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:03.588635   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.736031   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.755347   34720 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:32:03.755606   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:03.757343   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:03.758664   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.930799   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.947764   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:03.948004   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:03.948058   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:03.948281   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.948378   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:03.948390   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.948398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.948408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.951644   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.952631   34720 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:32:03.952655   34720 node_ready.go:38] duration metric: took 4.354654ms for node "ha-033260-m03" to be "Ready" ...
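The Ready check above boils down to the Node's status.conditions: the node counts as Ready when the condition of type Ready reports status "True". A decoding sketch, trimmed to just the fields the check needs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors only the part of a v1.Node that the readiness check reads.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isNodeReady reports whether the Ready condition is True in the JSON body
// returned by GET /api/v1/nodes/<name>.
func isNodeReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ok, _ := isNodeReady(body)
	fmt.Println("ready:", ok)
}
```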
	I0930 11:32:03.952666   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:03.952741   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:03.952751   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.952758   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.952763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.959043   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:03.966223   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:03.966318   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:03.966326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.966334   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.966341   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.969582   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.970409   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:03.970425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.970433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.970436   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.973995   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
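From here the log repeats the same pair of requests roughly every 500ms (fetch the pod, then fetch the node it runs on) until the pod's Ready condition flips to "True" or the 6m0s budget expires. A compressed sketch of that polling shape; checkPodReady stands in for the two GETs, and the interval and timeout are taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

// waitForPodReady keeps re-checking until the pod reports Ready or the
// 6m0s budget runs out. checkPodReady stands in for the pod and node GETs
// that each iteration in the log performs.
func waitForPodReady(checkPodReady func() (bool, error)) error {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := checkPodReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pod to be Ready")
}

func main() {
	calls := 0
	err := waitForPodReady(func() (bool, error) {
		calls++
		return calls >= 3, nil // pretend the pod turns Ready on the third poll
	})
	fmt.Println("result:", err)
}
```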
	I0930 11:32:04.466604   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.466626   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.466634   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.466638   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.470966   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.470982   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.470989   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470994   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.473518   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:04.966613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.966634   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.966642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.966647   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.970295   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.971225   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.971247   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.971256   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.971267   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.974506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.466575   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.466597   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.466605   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.466609   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.471476   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.472347   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.472369   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.472379   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.472385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.476605   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.966462   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.966484   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.966495   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.966499   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.970347   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.971438   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.971455   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.971465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.971469   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.975635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.976454   34720 pod_ready.go:103] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:06.466781   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.466807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.466818   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.466825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.470300   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.471083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.471100   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.471108   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.471111   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.474455   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.966864   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.966887   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.966895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.966899   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.970946   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:06.971993   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.972007   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.972014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.972021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.975563   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.466626   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.466651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.466664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.466671   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.471030   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:07.471751   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.471767   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.471775   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.471780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.475078   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.966446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.966464   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.966472   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.966476   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.970130   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.970892   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.970907   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.970916   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.970921   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.974558   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.467355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:08.467382   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.467392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.467398   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.491602   34720 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0930 11:32:08.492458   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.492478   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.492488   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.492494   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.504709   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:32:08.505926   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.505961   34720 pod_ready.go:82] duration metric: took 4.539705143s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.505976   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.506053   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:08.506070   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.506079   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.506091   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.513015   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:08.514472   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.514492   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.514500   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.514504   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.522097   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:08.522597   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.522620   34720 pod_ready.go:82] duration metric: took 16.634648ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522632   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522710   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:08.522720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.522730   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.522736   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.528114   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:08.529205   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.529222   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.529239   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.529245   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.532511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.533059   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.533085   34720 pod_ready.go:82] duration metric: took 10.444686ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533097   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:08.533175   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.533185   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.533194   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.536360   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.537030   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:08.537046   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.537054   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.537058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.540241   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.540684   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.540702   34720 pod_ready.go:82] duration metric: took 7.598243ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540712   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540774   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:08.540782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.540789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.540794   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.544599   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.545135   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:08.545150   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.545158   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.545161   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.548627   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.041691   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.041715   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.041724   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.041728   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.045686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.046390   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.046409   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.046420   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.046428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.050351   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.541239   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.541273   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.541285   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.541291   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.544605   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.545287   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.545303   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.545311   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.545314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.548353   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.041331   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.041356   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.041368   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.041373   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.045200   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.046010   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.046031   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.046039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.046046   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.049179   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.541488   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.541515   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.541528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.541536   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.545641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:10.546377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.546400   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.546407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.546410   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.549732   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.550616   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:11.040952   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.040974   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.040982   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.040989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.046528   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:11.047555   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.047571   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.047581   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.047586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.051499   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:11.541109   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.541139   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.541149   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.541154   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.545483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:11.546103   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.546119   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.546130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.546136   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.549272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:12.041130   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.041165   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.041176   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.041182   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.045465   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.046261   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.046277   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.046284   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.046289   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.054233   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:12.540971   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.540992   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.541000   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.541004   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.545075   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.545773   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.545789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.545799   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.545805   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.549003   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.041785   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.041807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.041817   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.041823   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.045506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.046197   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.046214   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.046221   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.046241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.048544   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:13.048911   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:13.541700   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.541728   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.541740   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.541748   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.545726   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.546727   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.546742   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.546749   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.546753   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.549687   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:14.041571   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.041593   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.041601   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.041605   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.045629   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.047164   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.047185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.047199   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.047203   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.052005   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:14.541017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.541043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.541055   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.541060   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.545027   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.546245   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.546266   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.546275   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.546280   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.549572   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.041446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.041468   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.041477   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.041481   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.045111   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.045983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.046004   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.046014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.046021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.055916   34720 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:32:15.056489   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:15.541417   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.541448   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.541460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.541465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.544952   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.545764   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.545781   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.545790   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.545795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.552050   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:16.040979   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.041003   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.041011   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.041016   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.045765   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:16.046411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.046427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.046435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.046439   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.056745   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:32:16.541660   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.541682   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.541692   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.541696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.545213   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:16.546092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.546110   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.546121   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.546126   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.548900   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.041375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.041399   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.041411   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.041417   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.045641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:17.046588   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.046611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.046621   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.046628   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.049632   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.541651   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.541676   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.541686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.541692   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.545407   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:17.546246   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.546269   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.546282   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.546290   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.549117   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.549778   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:18.041518   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.041556   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.041568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.041576   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:18.046748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.046769   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.046780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046787   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.052283   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:18.541399   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.541425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.541433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.541437   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.545011   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:18.546056   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.546078   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.546089   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.546097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.549203   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.041166   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.041201   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.041210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.041214   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.045755   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.046481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.046500   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.046510   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.046517   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.049924   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.541836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.541873   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.541885   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.541893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.546183   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.547097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.547116   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.547126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.547130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.551235   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.551688   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:20.041000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.041027   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.041039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.041053   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.045149   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.045912   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.045934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.045945   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.045950   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.049525   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:20.541792   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.541813   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.541821   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.541825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.546083   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.546947   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.546969   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.546980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.546988   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.551303   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:21.041910   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.041938   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.041950   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.041955   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.047824   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:21.048523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.048544   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.048555   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.048560   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.051690   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.541671   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.541695   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.541707   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.541714   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.545187   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.545925   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.545943   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.545957   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.549146   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.040908   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.040934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.040944   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.040949   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.044322   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.045253   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.045275   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.045286   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.045311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.048540   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.049217   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:22.541377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.541397   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.541405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.596027   34720 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0930 11:32:22.596840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.596858   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.596868   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.596876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.600101   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.041796   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.041817   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.041826   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.041830   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.046144   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:23.047374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.047396   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.047407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.047412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.051210   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.541365   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.541391   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.541403   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.544624   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.545332   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.545348   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.545356   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.545362   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.548076   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.040942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.040985   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.040995   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.040999   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.044909   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.045625   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.045642   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.045653   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.045658   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.048446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.541477   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.541497   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.541506   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.541509   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.545585   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:24.546447   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.546460   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.546468   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.546472   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.549497   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.550184   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:25.041599   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.041635   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.041645   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.041651   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.048106   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:25.048975   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.048998   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.049008   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.049013   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.054165   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:25.541178   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.541223   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.541235   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.541241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.545143   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:25.545923   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.545941   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.545962   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.549975   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.041161   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.041185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.041193   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.041199   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.045231   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:26.046025   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.046042   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.046049   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.046055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.048864   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:26.541487   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.541511   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.541521   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.541528   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.548114   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:26.548980   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.548993   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.549001   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.549005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.552757   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.553360   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:27.041590   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.041611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.041636   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.041639   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.046112   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:27.047076   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.047092   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.047100   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.047104   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.052347   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:27.541767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.541789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.541797   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.541801   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.545090   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:27.545664   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.545678   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.545686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.545690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.548839   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.041179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.041200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.041212   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.041217   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.046396   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:28.047355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.047372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.047384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.047388   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.053891   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:28.541237   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.541259   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.541268   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.541271   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545192   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.545941   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.545959   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.545967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.549204   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.550435   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.550457   34720 pod_ready.go:82] duration metric: took 20.009736872s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550559   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:32:28.550570   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.550580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.550590   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.553686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.554394   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:28.554407   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.554414   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.554420   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.556924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.557578   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.557600   34720 pod_ready.go:82] duration metric: took 7.108562ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557612   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557692   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:32:28.557702   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.557712   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.557722   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.560446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.561014   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:28.561029   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.561036   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.561040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.563867   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.564450   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.564468   34720 pod_ready.go:82] duration metric: took 6.836659ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:28.564568   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.564578   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.564586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.567937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.568639   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.568653   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.568661   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.568664   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.571277   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:29.065431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.065458   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.065466   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.065469   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.069406   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.070004   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.070020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.070028   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.070033   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.073076   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.565018   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.565043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.565052   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.565055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.568350   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.569071   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.569090   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.569101   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.569107   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.572794   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.065688   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.065710   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.065717   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.065721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.069593   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.070370   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.070385   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.070393   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.070397   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.073099   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.565351   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.565372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.565380   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.565385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.568480   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.569460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.569481   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.569489   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.569493   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.572043   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.572542   34720 pod_ready.go:103] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:31.064934   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:31.064954   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.064963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.064967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.069154   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:31.070615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.070631   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.070642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.070648   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.073638   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.074233   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.074258   34720 pod_ready.go:82] duration metric: took 2.50976614s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074273   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:32:31.074392   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.074418   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.074427   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.077429   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.078309   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:31.078326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.078336   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.078343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.080937   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.081321   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.081341   34720 pod_ready.go:82] duration metric: took 7.059128ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081353   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081418   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:32:31.081428   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.081438   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.081447   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.084351   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.084930   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:31.084944   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.084951   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.084956   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.087905   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.088473   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.088493   34720 pod_ready.go:82] duration metric: took 7.129947ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.088504   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.141826   34720 request.go:632] Waited for 53.255293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141907   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141915   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.141924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.141929   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.145412   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.341415   34720 request.go:632] Waited for 195.313156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341506   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.341520   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.341524   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.344937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.589605   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.589637   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.589646   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.589651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.593330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.741775   34720 request.go:632] Waited for 147.33103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741847   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.741857   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.741869   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.745796   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.089735   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.089761   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.089772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.089776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.093492   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.141705   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.141744   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.141752   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.141757   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.145662   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.589384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.589408   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.589418   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.589426   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.592976   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.593954   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.593971   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.593979   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.593983   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.597157   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.089690   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:33.089720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.089733   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.089743   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.094817   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:33.095412   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:33.095427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.095435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.095442   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.098967   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.099551   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.099569   34720 pod_ready.go:82] duration metric: took 2.011056626s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.099580   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.141920   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:32:33.141953   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.141961   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.141965   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.146176   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:33.342278   34720 request.go:632] Waited for 195.329061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342343   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342351   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.342362   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.342368   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.346051   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.346626   34720 pod_ready.go:98] node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346650   34720 pod_ready.go:82] duration metric: took 247.062207ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	E0930 11:32:33.346662   34720 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346673   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.541732   34720 request.go:632] Waited for 194.984853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541823   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541832   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.541839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.541846   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.545738   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.741681   34720 request.go:632] Waited for 195.307104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741746   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741753   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.741839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.741853   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.745711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.746422   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.746442   34720 pod_ready.go:82] duration metric: took 399.762428ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.746454   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.941491   34720 request.go:632] Waited for 194.974915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941575   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.941582   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.941585   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.945250   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.142081   34720 request.go:632] Waited for 196.05781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142187   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142199   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.142207   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.142211   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.146079   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.146737   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.146756   34720 pod_ready.go:82] duration metric: took 400.295141ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.146770   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.342040   34720 request.go:632] Waited for 195.196365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342146   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342159   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.342171   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.342181   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.345711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.541794   34720 request.go:632] Waited for 195.310617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541870   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541876   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.541884   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.541889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.545585   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.546141   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.546158   34720 pod_ready.go:82] duration metric: took 399.379827ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.546174   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.742192   34720 request.go:632] Waited for 195.896441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742266   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742272   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.742279   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.742283   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.745382   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.941671   34720 request.go:632] Waited for 195.443927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941750   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941755   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.941763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.941767   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.945425   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.946182   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.946207   34720 pod_ready.go:82] duration metric: took 400.022007ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.946220   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.142264   34720 request.go:632] Waited for 195.977294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142349   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142355   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.142363   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.142372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.146093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.342119   34720 request.go:632] Waited for 195.354718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342174   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342179   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.342185   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.342189   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.345678   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.346226   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.346244   34720 pod_ready.go:82] duration metric: took 400.013115ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.346253   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.541907   34720 request.go:632] Waited for 195.545182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541986   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541995   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.542006   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.542018   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.545604   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.741571   34720 request.go:632] Waited for 195.370489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741659   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741667   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.741678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.741690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.745574   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.746159   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.746179   34720 pod_ready.go:82] duration metric: took 399.919057ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.746193   34720 pod_ready.go:39] duration metric: took 31.793515417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:35.746211   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:32:35.746295   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:32:35.770439   34720 api_server.go:72] duration metric: took 32.015036347s to wait for apiserver process to appear ...
	I0930 11:32:35.770467   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:32:35.770491   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:32:35.775724   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:32:35.775811   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:32:35.775820   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.775829   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.775838   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.776730   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:32:35.776791   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:32:35.776806   34720 api_server.go:131] duration metric: took 6.332786ms to wait for apiserver health ...
	I0930 11:32:35.776814   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:32:35.942219   34720 request.go:632] Waited for 165.338166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942284   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942290   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.942302   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.942308   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.948613   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:35.956880   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:32:35.956918   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:35.956927   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:35.956932   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:35.956938   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:35.956942   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:35.956947   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:35.956951   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:35.956956   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:35.956960   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:35.956965   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:35.956971   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:35.956977   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:35.956988   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:35.956996   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:35.957001   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:35.957009   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:35.957014   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:35.957019   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:35.957027   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:35.957033   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:35.957041   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:35.957046   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:35.957053   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:35.957058   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:35.957066   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:35.957070   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:35.957081   34720 system_pods.go:74] duration metric: took 180.260558ms to wait for pod list to return data ...
	I0930 11:32:35.957093   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:32:36.141557   34720 request.go:632] Waited for 184.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141646   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141655   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.141664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.141669   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.146009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.146146   34720 default_sa.go:45] found service account: "default"
	I0930 11:32:36.146163   34720 default_sa.go:55] duration metric: took 189.061389ms for default service account to be created ...
	I0930 11:32:36.146176   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:32:36.341683   34720 request.go:632] Waited for 195.43917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341772   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.341789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.341795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.348026   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:36.355936   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:32:36.355974   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:36.355980   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:36.355985   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:36.355989   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:36.355993   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:36.355997   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:36.356000   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:36.356003   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:36.356007   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:36.356011   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:36.356015   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:36.356019   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:36.356022   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:36.356025   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:36.356028   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:36.356031   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:36.356034   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:36.356037   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:36.356041   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:36.356044   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:36.356050   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:36.356053   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:36.356059   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:36.356062   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:36.356065   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:36.356068   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:36.356075   34720 system_pods.go:126] duration metric: took 209.893533ms to wait for k8s-apps to be running ...
	I0930 11:32:36.356084   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:32:36.356128   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:32:36.376905   34720 system_svc.go:56] duration metric: took 20.807413ms WaitForService to wait for kubelet
	I0930 11:32:36.376934   34720 kubeadm.go:582] duration metric: took 32.621540674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:32:36.376952   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:32:36.541278   34720 request.go:632] Waited for 164.265532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541328   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541345   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.541372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.541378   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.545532   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.546930   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546950   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546960   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546964   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546970   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546975   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546980   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546984   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546989   34720 node_conditions.go:105] duration metric: took 170.032136ms to run NodePressure ...
	I0930 11:32:36.547003   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:32:36.547027   34720 start.go:255] writing updated cluster config ...
	I0930 11:32:36.548771   34720 out.go:201] 
	I0930 11:32:36.549990   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:36.550071   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.551533   34720 out.go:177] * Starting "ha-033260-m04" worker node in "ha-033260" cluster
	I0930 11:32:36.552654   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:32:36.552671   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:32:36.552768   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:32:36.552782   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:32:36.552887   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.553084   34720 start.go:360] acquireMachinesLock for ha-033260-m04: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:32:36.553130   34720 start.go:364] duration metric: took 26.329µs to acquireMachinesLock for "ha-033260-m04"
	I0930 11:32:36.553148   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:32:36.553160   34720 fix.go:54] fixHost starting: m04
	I0930 11:32:36.553451   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:36.553481   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:36.569922   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0930 11:32:36.570471   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:36.571044   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:36.571066   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:36.571377   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:36.571578   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:36.571759   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:32:36.573541   34720 fix.go:112] recreateIfNeeded on ha-033260-m04: state=Stopped err=<nil>
	I0930 11:32:36.573570   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	W0930 11:32:36.573771   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:32:36.575555   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m04" ...
	I0930 11:32:36.576772   34720 main.go:141] libmachine: (ha-033260-m04) Calling .Start
	I0930 11:32:36.576973   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring networks are active...
	I0930 11:32:36.577708   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network default is active
	I0930 11:32:36.578046   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network mk-ha-033260 is active
	I0930 11:32:36.578396   34720 main.go:141] libmachine: (ha-033260-m04) Getting domain xml...
	I0930 11:32:36.579052   34720 main.go:141] libmachine: (ha-033260-m04) Creating domain...
	I0930 11:32:37.876264   34720 main.go:141] libmachine: (ha-033260-m04) Waiting to get IP...
	I0930 11:32:37.877213   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:37.877645   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:37.877707   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:37.877598   36596 retry.go:31] will retry after 232.490022ms: waiting for machine to come up
	I0930 11:32:38.112070   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.112572   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.112594   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.112550   36596 retry.go:31] will retry after 256.882229ms: waiting for machine to come up
	I0930 11:32:38.371192   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.371815   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.371840   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.371754   36596 retry.go:31] will retry after 461.059855ms: waiting for machine to come up
	I0930 11:32:38.834060   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.834574   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.834602   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.834535   36596 retry.go:31] will retry after 561.972608ms: waiting for machine to come up
	I0930 11:32:39.398393   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:39.398837   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:39.398861   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:39.398804   36596 retry.go:31] will retry after 603.760478ms: waiting for machine to come up
	I0930 11:32:40.004623   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.004981   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.005003   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.004944   36596 retry.go:31] will retry after 795.659949ms: waiting for machine to come up
	I0930 11:32:40.802044   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.802482   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.802507   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.802432   36596 retry.go:31] will retry after 876.600506ms: waiting for machine to come up
	I0930 11:32:41.680956   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:41.681439   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:41.681475   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:41.681410   36596 retry.go:31] will retry after 1.356578507s: waiting for machine to come up
	I0930 11:32:43.039790   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:43.040245   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:43.040273   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:43.040181   36596 retry.go:31] will retry after 1.138308059s: waiting for machine to come up
	I0930 11:32:44.180454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:44.180880   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:44.180912   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:44.180838   36596 retry.go:31] will retry after 1.724095206s: waiting for machine to come up
	I0930 11:32:45.906969   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:45.907551   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:45.907580   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:45.907505   36596 retry.go:31] will retry after 2.79096153s: waiting for machine to come up
	I0930 11:32:48.699904   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:48.700403   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:48.700433   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:48.700358   36596 retry.go:31] will retry after 2.880773223s: waiting for machine to come up
	I0930 11:32:51.582182   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:51.582528   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:51.582553   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:51.582515   36596 retry.go:31] will retry after 3.567167233s: waiting for machine to come up
	I0930 11:32:55.151238   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.151679   34720 main.go:141] libmachine: (ha-033260-m04) Found IP for machine: 192.168.39.104
	I0930 11:32:55.151704   34720 main.go:141] libmachine: (ha-033260-m04) Reserving static IP address...
	I0930 11:32:55.151717   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has current primary IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.152141   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.152161   34720 main.go:141] libmachine: (ha-033260-m04) Reserved static IP address: 192.168.39.104
	I0930 11:32:55.152180   34720 main.go:141] libmachine: (ha-033260-m04) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"}
	I0930 11:32:55.152198   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Getting to WaitForSSH function...
	I0930 11:32:55.152212   34720 main.go:141] libmachine: (ha-033260-m04) Waiting for SSH to be available...
	I0930 11:32:55.154601   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.154955   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.154984   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.155062   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH client type: external
	I0930 11:32:55.155094   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa (-rw-------)
	I0930 11:32:55.155127   34720 main.go:141] libmachine: (ha-033260-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:32:55.155140   34720 main.go:141] libmachine: (ha-033260-m04) DBG | About to run SSH command:
	I0930 11:32:55.155169   34720 main.go:141] libmachine: (ha-033260-m04) DBG | exit 0
	I0930 11:32:55.282203   34720 main.go:141] libmachine: (ha-033260-m04) DBG | SSH cmd err, output: <nil>: 
	I0930 11:32:55.282534   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetConfigRaw
	I0930 11:32:55.283161   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.286073   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286485   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.286510   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286784   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:55.287029   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:32:55.287049   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:55.287272   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.289455   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.289920   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.289948   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.290156   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.290326   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290453   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290576   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.290707   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.290900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.290913   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:32:55.398165   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:32:55.398197   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398448   34720 buildroot.go:166] provisioning hostname "ha-033260-m04"
	I0930 11:32:55.398492   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398697   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.401792   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.402275   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402458   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.402629   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402793   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402918   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.403113   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.403282   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.403294   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m04 && echo "ha-033260-m04" | sudo tee /etc/hostname
	I0930 11:32:55.531966   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m04
	
	I0930 11:32:55.531997   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.535254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535632   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.535675   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535815   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.536008   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536169   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536305   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.536447   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.536613   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.536629   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:55.658892   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:55.658919   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:55.658936   34720 buildroot.go:174] setting up certificates
	I0930 11:32:55.658945   34720 provision.go:84] configureAuth start
	I0930 11:32:55.658953   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.659243   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.662312   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662773   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.662798   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662957   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.665302   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665663   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.665690   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665764   34720 provision.go:143] copyHostCerts
	I0930 11:32:55.665796   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665833   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:55.665842   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665927   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:55.666021   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666039   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:55.666047   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666074   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:55.666119   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666136   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:55.666142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:55.666213   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m04 san=[127.0.0.1 192.168.39.104 ha-033260-m04 localhost minikube]
	I0930 11:32:55.889392   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:55.889469   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:55.889499   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.892080   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892386   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.892413   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892551   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.892776   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.892978   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.893178   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:55.976164   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:55.976265   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:56.003465   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:56.003537   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:32:56.030648   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:56.030726   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:56.059845   34720 provision.go:87] duration metric: took 400.888299ms to configureAuth
	I0930 11:32:56.059878   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:56.060173   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:56.060271   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.063160   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063613   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.063639   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063847   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.064052   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064240   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064367   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.064511   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.064690   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.064709   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:56.291657   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:56.291682   34720 machine.go:96] duration metric: took 1.004640971s to provisionDockerMachine
	I0930 11:32:56.291696   34720 start.go:293] postStartSetup for "ha-033260-m04" (driver="kvm2")
	I0930 11:32:56.291709   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:56.291730   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.292023   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:56.292057   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.294563   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.294915   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.294940   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.295103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.295280   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.295424   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.295532   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.385215   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:56.389877   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:56.389903   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:56.389972   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:56.390073   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:56.390086   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:56.390178   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:56.400442   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:56.429361   34720 start.go:296] duration metric: took 137.644684ms for postStartSetup
	I0930 11:32:56.429427   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.429716   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:56.429741   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.432628   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433129   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.433159   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433319   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.433495   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.433694   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.433867   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.520351   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:56.520411   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:56.579433   34720 fix.go:56] duration metric: took 20.026269147s for fixHost
	I0930 11:32:56.579489   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.582670   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583091   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.583121   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583274   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.583494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583682   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583865   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.584063   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.584279   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.584294   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:56.698854   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695976.655532462
	
	I0930 11:32:56.698887   34720 fix.go:216] guest clock: 1727695976.655532462
	I0930 11:32:56.698900   34720 fix.go:229] Guest: 2024-09-30 11:32:56.655532462 +0000 UTC Remote: 2024-09-30 11:32:56.579461897 +0000 UTC m=+453.306592605 (delta=76.070565ms)
	I0930 11:32:56.698920   34720 fix.go:200] guest clock delta is within tolerance: 76.070565ms
	I0930 11:32:56.698927   34720 start.go:83] releasing machines lock for "ha-033260-m04", held for 20.145784895s
	I0930 11:32:56.698949   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.699224   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:56.702454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.702852   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.702883   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.705376   34720 out.go:177] * Found network options:
	I0930 11:32:56.706947   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	W0930 11:32:56.708247   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708274   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708287   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.708308   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.708969   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709162   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709279   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:56.709323   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	W0930 11:32:56.709360   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709386   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709401   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.709475   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:56.709494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.712173   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712335   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712568   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712592   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712731   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.712845   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712870   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712874   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.712987   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.713033   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.713168   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.713207   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713330   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.934813   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:56.941348   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:56.941419   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:56.960961   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:56.960992   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:56.961081   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:56.980594   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:56.996216   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:56.996273   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:57.013214   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:57.028755   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:57.149354   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:57.318133   34720 docker.go:233] disabling docker service ...
	I0930 11:32:57.318197   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:57.334364   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:57.349711   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:57.496565   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:57.627318   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:57.643513   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:57.667655   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:57.667720   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.680838   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:57.680907   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.693421   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.705291   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.717748   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:57.730805   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.742351   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.761934   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.773112   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:57.783201   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:57.783257   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:57.797812   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:57.813538   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:57.938077   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:58.044521   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:58.044587   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:58.049533   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:58.049596   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:58.053988   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:58.101662   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:58.101732   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.132323   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.163597   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:58.164981   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:58.166271   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:58.167862   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	I0930 11:32:58.169165   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:58.172162   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172529   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:58.172550   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172762   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:58.178993   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.194096   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:58.194385   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.194741   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.194790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.210665   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0930 11:32:58.211101   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.211610   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.211628   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.211954   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.212130   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:58.213485   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:58.213820   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.213854   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.228447   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0930 11:32:58.228877   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.229355   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.229373   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.229837   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.230027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:58.230180   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.104
	I0930 11:32:58.230191   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:58.230204   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:58.230340   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:58.230387   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:58.230397   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:58.230409   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:58.230422   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:58.230434   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:58.230491   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:58.230521   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:58.230531   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:58.230554   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:58.230577   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:58.230597   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:58.230650   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:58.230688   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.230705   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.230732   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.230759   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:58.258115   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:58.284212   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:58.311332   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:58.336428   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:58.362719   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:58.389689   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:58.416593   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:58.423417   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:58.435935   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442361   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442428   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.448829   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:58.461056   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:58.473436   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478046   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478120   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.484917   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:58.497497   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:58.509506   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514695   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514766   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.521000   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:58.533195   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:58.538066   34720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
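	(Context for the cert steps above: each PEM is copied into /usr/share/ca-certificates and then linked under /etc/ssl/certs by its OpenSSL subject hash, e.g. b5213941.0, which is how OpenSSL-based tools on the guest locate a CA. A minimal sketch of that convention is below; it assumes a host with the openssl CLI on PATH, and the helper name installCert is illustrative, not minikube's own code.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert assumes the PEM is already in place (as the scp steps above did)
// and only creates the <subject-hash>.0 symlink that OpenSSL-based tools use
// to look up a CA under /etc/ssl/certs.
func installCert(pemPath string) error {
	// Ask openssl for the subject hash, exactly as the log does:
	//   openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point it at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}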
	I0930 11:32:58.538108   34720 kubeadm.go:934] updating node {m04 192.168.39.104 0 v1.31.1 crio false true} ...
	I0930 11:32:58.538196   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:58.538246   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:58.549564   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:58.549678   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0930 11:32:58.561086   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:58.581046   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:58.599680   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:58.603972   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.618040   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.758745   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.778316   34720 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0930 11:32:58.778666   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.780417   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:58.781848   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.954652   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.980788   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:58.981140   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:58.981229   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:58.981531   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:58.981654   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:58.981668   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:58.981678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:58.981682   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:58.985441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.482501   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:59.482522   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.482530   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.482534   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.485809   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.486316   34720 node_ready.go:49] node "ha-033260-m04" has status "Ready":"True"
	I0930 11:32:59.486339   34720 node_ready.go:38] duration metric: took 504.792648ms for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:59.486347   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:59.486421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:59.486437   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.486444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.486448   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.491643   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:59.500880   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.501000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:59.501020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.501033   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.501040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.504511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.505105   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.505120   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.505126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.505130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.508330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.508816   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.508834   34720 pod_ready.go:82] duration metric: took 7.916953ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508846   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508911   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:59.508921   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.508931   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.508940   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.512254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.513133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.513147   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.513157   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.513162   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.516730   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.517273   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.517290   34720 pod_ready.go:82] duration metric: took 8.437165ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517301   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517361   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:59.517370   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.517380   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.517387   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521073   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.521748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.521764   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.521772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.524702   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.525300   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.525316   34720 pod_ready.go:82] duration metric: took 8.008761ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525325   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:59.525383   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.525390   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.525393   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.528314   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.528898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:59.528914   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.528924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.528930   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.531717   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.532229   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.532246   34720 pod_ready.go:82] duration metric: took 6.914296ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.532257   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.682582   34720 request.go:632] Waited for 150.25854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682645   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.682658   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.682662   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.689539   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:59.883130   34720 request.go:632] Waited for 192.41473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.883210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.883232   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.887618   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:59.888108   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.888129   34720 pod_ready.go:82] duration metric: took 355.865471ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.888150   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.083448   34720 request.go:632] Waited for 195.22183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083541   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083549   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.083560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.083571   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.087440   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.283491   34720 request.go:632] Waited for 195.322885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.283590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.283596   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.287218   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.287959   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.287982   34720 pod_ready.go:82] duration metric: took 399.823014ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.287995   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.483353   34720 request.go:632] Waited for 195.279455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483436   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483446   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.483457   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.483468   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.487640   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:00.682537   34720 request.go:632] Waited for 194.177349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682623   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.682632   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.682641   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.686128   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.686721   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.686744   34720 pod_ready.go:82] duration metric: took 398.740461ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.686757   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.882895   34720 request.go:632] Waited for 196.06624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882956   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.882963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.882967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.887704   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.082816   34720 request.go:632] Waited for 194.378573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082908   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.082920   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.082928   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.086938   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.088023   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.088045   34720 pod_ready.go:82] duration metric: took 401.279304ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.088058   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.283083   34720 request.go:632] Waited for 194.957282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283183   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283198   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.283211   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.283221   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.288754   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:33:01.482812   34720 request.go:632] Waited for 193.21938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482876   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482883   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.482895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.482906   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.487184   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.488013   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.488035   34720 pod_ready.go:82] duration metric: took 399.968755ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.488047   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.682796   34720 request.go:632] Waited for 194.675415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682878   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682885   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.682895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.682903   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.687354   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.883473   34720 request.go:632] Waited for 195.37133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883544   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883551   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.883560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.883565   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.887254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.887998   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.888020   34720 pod_ready.go:82] duration metric: took 399.964872ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.888033   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.082969   34720 request.go:632] Waited for 194.870325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083045   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083051   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.083059   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.083071   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.087791   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.283169   34720 request.go:632] Waited for 194.361368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283289   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283304   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.283331   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.283350   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.289541   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:02.290706   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:02.290729   34720 pod_ready.go:82] duration metric: took 402.687198ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.290741   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.483158   34720 request.go:632] Waited for 192.351675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483216   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483222   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.483229   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.483233   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.487135   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:02.683325   34720 request.go:632] Waited for 195.063306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683451   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683485   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.683516   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.683525   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.687678   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.883237   34720 request.go:632] Waited for 92.265907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883323   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883335   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.883343   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.883351   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.887580   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.082785   34720 request.go:632] Waited for 194.294379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082857   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082862   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.082872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.082876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.086700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.291740   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:03.291767   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.291777   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.291783   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.295392   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.483576   34720 request.go:632] Waited for 187.437599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483647   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483655   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.483667   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.483677   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.487588   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.488048   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.488067   34720 pod_ready.go:82] duration metric: took 1.197317957s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.488076   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.683488   34720 request.go:632] Waited for 195.341906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.683590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.683597   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.687625   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.882797   34720 request.go:632] Waited for 194.279012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882884   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882896   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.882906   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.882924   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.886967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.887827   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.887857   34720 pod_ready.go:82] duration metric: took 399.773896ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.887870   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.082926   34720 request.go:632] Waited for 194.972094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083025   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.083037   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.083041   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.087402   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.283534   34720 request.go:632] Waited for 194.922082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283619   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.283626   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.283630   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.287420   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:04.288067   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.288124   34720 pod_ready.go:82] duration metric: took 400.245815ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.288141   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.483212   34720 request.go:632] Waited for 194.995215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483277   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483290   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.483319   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.483325   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.487831   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.682773   34720 request.go:632] Waited for 194.183233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682843   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.682854   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.682858   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.686967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.687793   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.687819   34720 pod_ready.go:82] duration metric: took 399.669055ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.687836   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.882848   34720 request.go:632] Waited for 194.931159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882922   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882930   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.882942   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.882951   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.886911   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.083280   34720 request.go:632] Waited for 195.375329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083376   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083387   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.083398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.083407   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.086880   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.087419   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.087441   34720 pod_ready.go:82] duration metric: took 399.596031ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.087453   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.282500   34720 request.go:632] Waited for 194.956546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282556   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282561   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.282568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.282582   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.285978   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.482968   34720 request.go:632] Waited for 196.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483125   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483139   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.483149   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.483155   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.489591   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:05.490240   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.490263   34720 pod_ready.go:82] duration metric: took 402.801252ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.490276   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.683160   34720 request.go:632] Waited for 192.80812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683317   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683345   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.683360   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.683366   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.687330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.883447   34720 request.go:632] Waited for 195.335552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883530   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.883545   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.883553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.887272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.888002   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.888020   34720 pod_ready.go:82] duration metric: took 397.737135ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.888031   34720 pod_ready.go:39] duration metric: took 6.401673703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:33:05.888048   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:33:05.888099   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:33:05.905331   34720 system_svc.go:56] duration metric: took 17.278667ms WaitForService to wait for kubelet
	I0930 11:33:05.905363   34720 kubeadm.go:582] duration metric: took 7.126999309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:33:05.905382   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:33:06.082680   34720 request.go:632] Waited for 177.227376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082733   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082739   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:06.082746   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:06.082751   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:06.087224   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:06.088896   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088918   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088929   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088932   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088935   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088939   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088942   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088945   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088948   34720 node_conditions.go:105] duration metric: took 183.562454ms to run NodePressure ...
	I0930 11:33:06.088959   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:33:06.088977   34720 start.go:255] writing updated cluster config ...
	I0930 11:33:06.089268   34720 ssh_runner.go:195] Run: rm -f paused
	I0930 11:33:06.143377   34720 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:33:06.145486   34720 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
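	(Context for the "Verifying Kubernetes components" phase above: the round_trippers lines are plain API-server polling: GET the node, or each system-critical pod, inspect its Ready condition, and retry until the timeout, pausing when the client-side rate limiter reports throttling. Below is a minimal sketch of that loop with client-go, assuming a kubeconfig at the default location; the node name ha-033260-m04 is taken from the log, everything else is illustrative rather than minikube's actual implementation.)

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready=True,
// mirroring the repeated GET /api/v1/nodes/<name> requests in the log above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-033260-m04", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}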
	
	
	==> CRI-O <==
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.807083442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695991807044333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fa7fac4-5520-490d-be05-c42715442448 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.807615024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6227a33-05f7-4377-a067-2aec06d11df0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.807688514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6227a33-05f7-4377-a067-2aec06d11df0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.807991526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6227a33-05f7-4377-a067-2aec06d11df0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.847957772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce27cc67-40ed-4a13-8d9a-d5e563fe6b87 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.848030587Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce27cc67-40ed-4a13-8d9a-d5e563fe6b87 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.849180397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5da3c433-a702-4a08-b3a2-1d766968a322 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.849759281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695991849732051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5da3c433-a702-4a08-b3a2-1d766968a322 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.850230066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de0ac9f1-b44a-4746-bc54-c0a086c95226 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.850378535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de0ac9f1-b44a-4746-bc54-c0a086c95226 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.850753317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de0ac9f1-b44a-4746-bc54-c0a086c95226 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.894446707Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d7918c7-173b-4725-bca3-a916384be9a4 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.894521351Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d7918c7-173b-4725-bca3-a916384be9a4 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.896036263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ea1da87-b72a-4c1a-aac1-e1ba44f9817d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.896633933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695991896607011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ea1da87-b72a-4c1a-aac1-e1ba44f9817d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.897546352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43e17ded-1abf-4340-85fb-f8fac06d7ced name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.897627589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43e17ded-1abf-4340-85fb-f8fac06d7ced name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.897972086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43e17ded-1abf-4340-85fb-f8fac06d7ced name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.946188890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d2d56eea-030c-406d-be95-b22df01953a5 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.946584769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2d56eea-030c-406d-be95-b22df01953a5 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.951586748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9a5bb78-ff21-42f5-acfe-b3db8cc52a08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.952049344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695991952017765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9a5bb78-ff21-42f5-acfe-b3db8cc52a08 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.954025962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec187a92-aa0e-4285-8d33-4c9a15b6f938 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.954237486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec187a92-aa0e-4285-8d33-4c9a15b6f938 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:33:11 ha-033260 crio[1037]: time="2024-09-30 11:33:11.955176636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec187a92-aa0e-4285-8d33-4c9a15b6f938 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	88e9d994261ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   d40067a91d083       storage-provisioner
	df3f12d455b8e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   2                   80de34a6f14ca       busybox-7dff88458-nbhwc
	1937cce4ac070       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               2                   40863d7ac6437       kindnet-g94k6
	447147b39349f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                2                   96e86b12ad9b7       kube-proxy-mxvxr
	d33c75c18e088       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   74bab7f17b06b       coredns-7c65d6cfc9-kt87v
	88e2f3c9b905b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   2                   f6863e18fb197       coredns-7c65d6cfc9-5frmm
	f4c792280b15b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       4                   d40067a91d083       storage-provisioner
	487866f095e01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   4                   1eee82fccc84c       kube-controller-manager-ha-033260
	6ea8bba210502       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            4                   498808de72075       kube-apiserver-ha-033260
	bf743c3bfec10       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     2 minutes ago        Running             kube-vip                  1                   bfb2a9b6e2e5a       kube-vip-ha-033260
	91514ddf1467c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            3                   498808de72075       kube-apiserver-ha-033260
	b2e1a261e4464       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      2                   5d3f45272bb02       etcd-ha-033260
	fd2ffaa7ff33d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            2                   aeafc6ee55a4d       kube-scheduler-ha-033260
	9f9c8e0b4eb8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   3                   1eee82fccc84c       kube-controller-manager-ha-033260
	
	
	==> coredns [88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60977 - 56023 "HINFO IN 6022066924044087929.8494370084378227503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030589997s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1363673838]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.175) (total time: 30002ms):
	Trace[1363673838]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:31:59.176)
	Trace[1363673838]: [30.00230997s] [30.00230997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1452341617]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30003ms):
	Trace[1452341617]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1452341617]: [30.0032564s] [30.0032564s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1546520065]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1546520065]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1546520065]: [30.002775951s] [30.002775951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44743 - 60294 "HINFO IN 2203689339262482561.411210931008286347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030703121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469308931]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[469308931]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.176)
	Trace[469308931]: [30.002568999s] [30.002568999s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1100740362]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1100740362]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1100740362]: [30.002476509s] [30.002476509s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1653957079]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.176) (total time: 30002ms):
	Trace[1653957079]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.178)
	Trace[1653957079]: [30.002259084s] [30.002259084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:31:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    819b9c53-0125-4e30-b11d-f0c734cdb490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  Starting                 21m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  21m                    kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                    kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m                    kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                20m                    kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m34s (x7 over 2m34s)  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           48s                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    c982302c-6e81-49de-9ba4-9fad6b0527be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 20m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-033260-m02 status is now: NodeNotReady
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m11s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m11s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m11s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           108s                   node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           48s                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           18m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           108s               node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  68s                kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s                kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s                kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 68s                kubelet          Node ha-033260-m03 has been rebooted, boot id: 0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Normal   RegisteredNode           48s                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:33:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:59 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    5c8fe13a-3363-443e-bb87-2dda804740af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9s                 kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeReady                17m                kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           108s               node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeNotReady             68s                node-controller  Node ha-033260-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           48s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 13s                kubelet          Node ha-033260-m04 has been rebooted, boot id: 5c8fe13a-3363-443e-bb87-2dda804740af
	  Normal   NodeReady                13s                kubelet          Node ha-033260-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep30 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051485] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.894871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.799819] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.926902] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.063947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060890] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.189706] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.143881] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.315063] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +4.231701] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.066662] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.898522] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.432816] kauditd_printk_skb: 40 callbacks suppressed
	[Sep30 11:31] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199] <==
	{"level":"warn","ts":"2024-09-30T11:32:00.706787Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:00.706927Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:02.199856Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:02.200043Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:05.707026Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:05.707078Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:06.201787Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:06.201855Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:09.282799Z","caller":"traceutil/trace.go:171","msg":"trace[64857037] transaction","detail":"{read_only:false; response_revision:2187; number_of_response:1; }","duration":"116.655315ms","start":"2024-09-30T11:32:09.166129Z","end":"2024-09-30T11:32:09.282785Z","steps":["trace[64857037] 'process raft request'  (duration: 116.522686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:32:10.203832Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.203980Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.707792Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:10.707849Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:13.224903Z","caller":"traceutil/trace.go:171","msg":"trace[1999434344] transaction","detail":"{read_only:false; response_revision:2202; number_of_response:1; }","duration":"128.3691ms","start":"2024-09-30T11:32:13.096517Z","end":"2024-09-30T11:32:13.224886Z","steps":["trace[1999434344] 'process raft request'  (duration: 128.283973ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:32:14.206454Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.238:2380/version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:14.206583Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ff39ee5ac13ccc82","error":"Get \"https://192.168.39.238:2380/version\": dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:15.708904Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-30T11:32:15.708958Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ff39ee5ac13ccc82","rtt":"0s","error":"dial tcp 192.168.39.238:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-30T11:32:17.251669Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.251722Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.287868Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.302877Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"ff39ee5ac13ccc82","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T11:32:17.302984Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	{"level":"info","ts":"2024-09-30T11:32:17.303315Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"ff39ee5ac13ccc82","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T11:32:17.303445Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"ff39ee5ac13ccc82"}
	
	
	==> kernel <==
	 11:33:12 up 2 min,  0 users,  load average: 0.32, 0.28, 0.12
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe] <==
	I0930 11:32:40.500194       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:32:50.507743       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:32:50.507860       1 main.go:299] handling current node
	I0930 11:32:50.507881       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:32:50.507891       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:32:50.508455       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:32:50.508496       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:32:50.508617       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:32:50.508652       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:33:00.499131       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:33:00.499232       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:33:00.499532       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:33:00.499622       1 main.go:299] handling current node
	I0930 11:33:00.499648       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:33:00.499654       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:33:00.499852       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:33:00.499936       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:33:10.504423       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:33:10.504559       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:33:10.504701       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:33:10.504826       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:33:10.504924       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:33:10.504945       1 main.go:299] handling current node
	I0930 11:33:10.504966       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:33:10.504982       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c] <==
	I0930 11:31:21.381575       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0930 11:31:21.538562       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:31:21.543182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:31:21.543721       1 policy_source.go:224] refreshing policies
	I0930 11:31:21.579575       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:31:21.579665       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:31:21.580585       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:31:21.581145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:31:21.581189       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:31:21.579601       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 11:31:21.579657       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:31:21.581999       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 11:31:21.582037       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:31:21.582044       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:31:21.582048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:31:21.582053       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:31:21.586437       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0930 11:31:21.607643       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0930 11:31:21.609050       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:31:21.622457       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 11:31:21.631794       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 11:31:21.643397       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:31:22.390935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 11:31:22.949170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	W0930 11:31:42.954664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249 192.168.39.3]
	
	
	==> kube-apiserver [91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1] <==
	I0930 11:30:45.187556       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:30:45.195121       1 server.go:142] Version: v1.31.1
	I0930 11:30:45.195252       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.676469       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:30:46.702385       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:30:46.710100       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:30:46.716179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:30:46.716589       1 instance.go:232] Using reconciler: lease
	W0930 11:31:06.661936       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.662284       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.717971       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:31:06.718008       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a] <==
	I0930 11:31:33.688278       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:31:45.516462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260"
	I0930 11:32:04.057525       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m02"
	I0930 11:32:04.231728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m03"
	I0930 11:32:04.684993       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:04.715867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:05.230878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.005408ms"
	I0930 11:32:05.231081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.153µs"
	I0930 11:32:05.557209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:08.338134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.292563ms"
	I0930 11:32:08.339116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.778µs"
	I0930 11:32:08.388088       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.933664ms"
	I0930 11:32:08.388698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.661µs"
	I0930 11:32:08.496117       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.972779ms"
	I0930 11:32:08.496306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.205µs"
	I0930 11:32:09.843317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:21.311773       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.445217ms"
	I0930 11:32:21.312598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="122.676µs"
	I0930 11:32:24.622549       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:24.711638       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:34.663019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m03"
	I0930 11:32:59.222283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.222671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:32:59.248647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.651723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	
	
	==> kube-controller-manager [9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438] <==
	I0930 11:30:45.993698       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:30:46.957209       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:30:46.957296       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.962662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:30:46.963278       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:30:46.963571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:30:46.963743       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:31:21.471526       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:31:29.611028       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:31:29.650081       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:31:29.650432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:31:29.730719       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:31:29.730781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:31:29.730811       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:31:29.734900       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:31:29.735864       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:31:29.735899       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:31:29.738688       1 config.go:199] "Starting service config controller"
	I0930 11:31:29.738986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:31:29.739407       1 config.go:328] "Starting node config controller"
	I0930 11:31:29.739433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:31:29.739913       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:31:29.743750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:31:29.743822       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:31:29.840409       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:31:29.840462       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40] <==
	E0930 11:31:21.474807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.474916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:31:21.474948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:31:21.475069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:31:21.475172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475256       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 11:31:21.475756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 11:31:21.475283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475433       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:31:21.476052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 11:31:21.476242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 11:31:21.476437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:31:21.476777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 11:31:21.478491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.475680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:31:21.478709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:31:21.480661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:31:21.480791       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 11:31:23.035263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 11:31:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:31:48 ha-033260 kubelet[1140]: E0930 11:31:48.042221    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695908040568557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:48 ha-033260 kubelet[1140]: E0930 11:31:48.042430    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695908040568557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:58 ha-033260 kubelet[1140]: E0930 11:31:58.049371    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695918048636085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:58 ha-033260 kubelet[1140]: E0930 11:31:58.049438    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695918048636085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:31:59 ha-033260 kubelet[1140]: I0930 11:31:59.442892    1140 scope.go:117] "RemoveContainer" containerID="f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519"
	Sep 30 11:32:08 ha-033260 kubelet[1140]: E0930 11:32:08.056080    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695928055647323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:08 ha-033260 kubelet[1140]: E0930 11:32:08.056129    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695928055647323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:18 ha-033260 kubelet[1140]: E0930 11:32:18.057838    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695938057305846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:18 ha-033260 kubelet[1140]: E0930 11:32:18.058200    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695938057305846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:28 ha-033260 kubelet[1140]: E0930 11:32:28.061267    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695948060750230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:28 ha-033260 kubelet[1140]: E0930 11:32:28.061868    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695948060750230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.065557    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695958063255462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.065622    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695958063255462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:38 ha-033260 kubelet[1140]: E0930 11:32:38.068753    1140 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:32:38 ha-033260 kubelet[1140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:32:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:32:48 ha-033260 kubelet[1140]: E0930 11:32:48.067027    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695968066700890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:48 ha-033260 kubelet[1140]: E0930 11:32:48.067068    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695968066700890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071041    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071099    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077457    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077518    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (83.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-033260 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-033260 --control-plane -v=7 --alsologtostderr: (1m18.964857682s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr: (1.157353119s)
ha_test.go:616: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-033260-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:619: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-033260-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:622: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-033260-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:625: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr": ha-033260
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-033260-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-033260-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
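Note on the four assertions above (ha_test.go:616, 619, 622, 625): each one inspects the per-node lines in the `status` output and reports a mismatch even though the `node add` itself succeeded, which suggests the checks compare exact counts against the original three-control-plane, four-node topology while the status now lists a fifth node (ha-033260-m05, a fourth control plane). The sketch below is illustrative only, assuming the check simply counts the status lines; it is not the actual ha_test.go implementation, and the sample input and expected counts are taken from the failure messages above.

// Illustrative sketch (assumed approach, not the real ha_test.go code):
// count the per-node status lines from `minikube status` output and
// compare them with the topology the failing assertions describe.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// statusOutput stands in for the captured output of
	// `out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr`
	// (truncated sample; the real output repeats one block per node).
	statusOutput := "type: Control Plane\nhost: Running\nkubelet: Running\napiserver: Running\n"

	controlPlanes := strings.Count(statusOutput, "type: Control Plane")
	hosts := strings.Count(statusOutput, "host: Running")
	kubelets := strings.Count(statusOutput, "kubelet: Running")
	apiservers := strings.Count(statusOutput, "apiserver: Running")

	// Expected counts taken from the failure messages above; with a fifth
	// node added (four of the five being control planes), exact-match
	// checks like these would report every mismatch seen in this test.
	if controlPlanes != 3 {
		fmt.Printf("status says not all three control-plane nodes are present: got %d\n", controlPlanes)
	}
	if hosts != 4 {
		fmt.Printf("status says not all four hosts are running: got %d\n", hosts)
	}
	if kubelets != 4 {
		fmt.Printf("status says not all four kubelets are running: got %d\n", kubelets)
	}
	if apiservers != 3 {
		fmt.Printf("status says not all three apiservers are running: got %d\n", apiservers)
	}
}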
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.898458368s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-033260 stop -v=7                                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true                                                         | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:25 UTC | 30 Sep 24 11:33 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-033260                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:33 UTC | 30 Sep 24 11:34 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:25:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:25:23.307171   34720 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:25:23.307438   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307448   34720 out.go:358] Setting ErrFile to fd 2...
	I0930 11:25:23.307454   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307638   34720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:25:23.308189   34720 out.go:352] Setting JSON to false
	I0930 11:25:23.309088   34720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4070,"bootTime":1727691453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:25:23.309188   34720 start.go:139] virtualization: kvm guest
	I0930 11:25:23.312163   34720 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:25:23.313387   34720 notify.go:220] Checking for updates...
	I0930 11:25:23.313393   34720 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:25:23.314778   34720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:25:23.316338   34720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:25:23.317962   34720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:25:23.319385   34720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:25:23.320813   34720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:25:23.322948   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:25:23.323340   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.323412   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.338759   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0930 11:25:23.339192   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.339786   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.339807   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.340136   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.340346   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.340572   34720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:25:23.340857   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.340891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.355777   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0930 11:25:23.356254   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.356744   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.356763   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.357120   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.357292   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.393653   34720 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:25:23.394968   34720 start.go:297] selected driver: kvm2
	I0930 11:25:23.394986   34720 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false
efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.395148   34720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:25:23.395486   34720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.395574   34720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:25:23.411100   34720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:25:23.411834   34720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:25:23.411865   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:25:23.411907   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:25:23.411964   34720 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.412098   34720 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.413851   34720 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:25:23.415381   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:25:23.415422   34720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:25:23.415429   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:25:23.415534   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:25:23.415546   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:25:23.415667   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:25:23.415859   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:25:23.415901   34720 start.go:364] duration metric: took 23.767µs to acquireMachinesLock for "ha-033260"
	I0930 11:25:23.415913   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:25:23.415920   34720 fix.go:54] fixHost starting: 
	I0930 11:25:23.416165   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.416196   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.430823   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0930 11:25:23.431277   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.431704   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.431723   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.432018   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.432228   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.432375   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:25:23.433975   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:25:23.434007   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:25:23.436150   34720 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:25:23.437473   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:25:23.437494   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.437753   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:25:23.440392   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.440831   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:25:23.440858   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.441041   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:25:23.441214   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441380   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441502   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:25:23.441655   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:25:23.441833   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:25:23.441844   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:25:26.337999   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:29.409914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:35.489955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:38.561928   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:44.641887   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:47.713916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:53.793988   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:56.865946   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:10.017864   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:16.097850   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:19.169940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:25.249934   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:28.321888   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:34.401910   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:37.473948   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:43.553872   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:46.625911   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:52.705908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:55.777884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:01.857921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:04.929922   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:11.009956   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:14.081936   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:20.161884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:23.233917   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:29.313903   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:32.385985   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:38.465815   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:41.537920   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:47.617898   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:50.689890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:56.769908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:59.841901   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:05.921893   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:08.993941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:15.073913   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:18.145943   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:24.225916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:27.297994   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:33.377803   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:36.449892   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:42.529904   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:45.601915   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:51.681921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:54.753890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:00.833932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:03.905924   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:09.985909   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:13.057955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:19.137932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:22.209941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:28.289972   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:31.361973   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:37.441940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:40.513906   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:46.593938   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:49.665931   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:55.745914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:58.817932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:04.897939   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:07.900098   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:07.900146   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900476   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:07.900498   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900690   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:07.902604   34720 machine.go:96] duration metric: took 4m44.465113929s to provisionDockerMachine
	I0930 11:30:07.902642   34720 fix.go:56] duration metric: took 4m44.486721557s for fixHost
	I0930 11:30:07.902649   34720 start.go:83] releasing machines lock for "ha-033260", held for 4m44.486740655s
	W0930 11:30:07.902664   34720 start.go:714] error starting host: provision: host is not running
	W0930 11:30:07.902739   34720 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 11:30:07.902751   34720 start.go:729] Will try again in 5 seconds ...
	I0930 11:30:12.906532   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:12.906673   34720 start.go:364] duration metric: took 71.92µs to acquireMachinesLock for "ha-033260"
	I0930 11:30:12.906700   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:12.906710   34720 fix.go:54] fixHost starting: 
	I0930 11:30:12.906980   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:12.907012   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:12.922017   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0930 11:30:12.922407   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:12.922875   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:12.922898   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:12.923192   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:12.923373   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:12.923532   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:30:12.925123   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Stopped err=<nil>
	I0930 11:30:12.925146   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	W0930 11:30:12.925301   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:12.927074   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260" ...
	I0930 11:30:12.928250   34720 main.go:141] libmachine: (ha-033260) Calling .Start
	I0930 11:30:12.928414   34720 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:30:12.929185   34720 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:30:12.929536   34720 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:30:12.929877   34720 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:30:12.930569   34720 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:30:14.153271   34720 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:30:14.154287   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.154676   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.154756   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.154665   35728 retry.go:31] will retry after 246.651231ms: waiting for machine to come up
	I0930 11:30:14.403231   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.403674   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.403727   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.403659   35728 retry.go:31] will retry after 262.960523ms: waiting for machine to come up
	I0930 11:30:14.668247   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.668711   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.668739   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.668675   35728 retry.go:31] will retry after 381.564783ms: waiting for machine to come up
	I0930 11:30:15.052320   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.052821   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.052846   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.052760   35728 retry.go:31] will retry after 588.393032ms: waiting for machine to come up
	I0930 11:30:15.642361   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.642772   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.642801   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.642723   35728 retry.go:31] will retry after 588.302425ms: waiting for machine to come up
	I0930 11:30:16.232721   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:16.233152   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:16.233171   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:16.233111   35728 retry.go:31] will retry after 770.742378ms: waiting for machine to come up
	I0930 11:30:17.005248   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:17.005687   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:17.005718   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:17.005645   35728 retry.go:31] will retry after 1.118737809s: waiting for machine to come up
	I0930 11:30:18.126316   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:18.126728   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:18.126755   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:18.126678   35728 retry.go:31] will retry after 1.317343847s: waiting for machine to come up
	I0930 11:30:19.446227   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:19.446785   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:19.446810   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:19.446709   35728 retry.go:31] will retry after 1.309700527s: waiting for machine to come up
	I0930 11:30:20.758241   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:20.758680   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:20.758702   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:20.758651   35728 retry.go:31] will retry after 1.521862953s: waiting for machine to come up
	I0930 11:30:22.282731   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:22.283205   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:22.283242   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:22.283159   35728 retry.go:31] will retry after 2.906878377s: waiting for machine to come up
	I0930 11:30:25.192687   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:25.193133   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:25.193170   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:25.193111   35728 retry.go:31] will retry after 2.807596314s: waiting for machine to come up
	I0930 11:30:28.002489   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:28.002972   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:28.003005   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:28.002951   35728 retry.go:31] will retry after 2.762675727s: waiting for machine to come up
	I0930 11:30:30.769002   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.769600   34720 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:30:30.769647   34720 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:30:30.769660   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.770061   34720 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:30:30.770097   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.770113   34720 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:30:30.770138   34720 main.go:141] libmachine: (ha-033260) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"}
	I0930 11:30:30.770150   34720 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:30:30.772370   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.772760   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772873   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:30:30.772897   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:30:30.772957   34720 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:30.772978   34720 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:30:30.772991   34720 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:30:30.902261   34720 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:30.902682   34720 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:30:30.903345   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:30.905986   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906435   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.906466   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906792   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:30.907003   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:30.907027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:30.907234   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:30.909474   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.909877   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.909908   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.910031   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:30.910192   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910303   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910430   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:30.910552   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:30.910754   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:30.910767   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:31.026522   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:31.026555   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.026772   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:31.026799   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.027007   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.029600   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.029965   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.029992   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.030147   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.030327   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030457   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030592   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.030726   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.030900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.030913   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:30:31.158417   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:30:31.158470   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.161439   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.161861   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.161898   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.162135   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.162317   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162476   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162595   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.162742   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.162897   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.162912   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:31.283806   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:31.283837   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:31.283864   34720 buildroot.go:174] setting up certificates
	I0930 11:30:31.283877   34720 provision.go:84] configureAuth start
	I0930 11:30:31.283888   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.284156   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:31.287095   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287561   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.287586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287860   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.290260   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290610   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.290638   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290768   34720 provision.go:143] copyHostCerts
	I0930 11:30:31.290802   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290847   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:31.290855   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290923   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:31.291012   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291029   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:31.291036   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291062   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:31.291116   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291138   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:31.291144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291169   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:31.291235   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:30:31.357378   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:31.357434   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:31.357461   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.360265   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360612   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.360639   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360895   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.361087   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.361219   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.361344   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.448948   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:31.449019   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:31.478937   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:31.479012   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:30:31.509585   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:31.509668   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:31.539539   34720 provision.go:87] duration metric: took 255.649967ms to configureAuth
	I0930 11:30:31.539565   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:31.539759   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:31.539826   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.542626   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543038   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.543072   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543261   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.543501   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543644   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543761   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.543949   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.544136   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.544151   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:31.800600   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:31.800624   34720 machine.go:96] duration metric: took 893.601125ms to provisionDockerMachine
	I0930 11:30:31.800638   34720 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:30:31.800650   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:31.800670   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.801007   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:31.801030   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.803813   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804193   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.804222   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804441   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.804604   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.804769   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.804939   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.893164   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:31.898324   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:31.898349   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:31.898488   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:31.898642   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:31.898657   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:31.898771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:31.909611   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:31.940213   34720 start.go:296] duration metric: took 139.562436ms for postStartSetup
	I0930 11:30:31.940253   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.940567   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:31.940600   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.943464   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.943880   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.943909   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.944048   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.944346   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.944569   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.944768   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.028986   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:32.029069   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:32.087362   34720 fix.go:56] duration metric: took 19.180639105s for fixHost
	I0930 11:30:32.087405   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.090539   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.090962   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.090988   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.091151   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.091371   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091585   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091707   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.091851   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:32.092025   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:32.092044   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:32.206950   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695832.171402259
	
	I0930 11:30:32.206975   34720 fix.go:216] guest clock: 1727695832.171402259
	I0930 11:30:32.206982   34720 fix.go:229] Guest: 2024-09-30 11:30:32.171402259 +0000 UTC Remote: 2024-09-30 11:30:32.087388641 +0000 UTC m=+308.814519334 (delta=84.013618ms)
	I0930 11:30:32.207008   34720 fix.go:200] guest clock delta is within tolerance: 84.013618ms
	I0930 11:30:32.207014   34720 start.go:83] releasing machines lock for "ha-033260", held for 19.300329364s
	I0930 11:30:32.207037   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.207322   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:32.209968   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210394   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.210419   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210638   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211106   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211267   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211375   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:32.211419   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.211462   34720 ssh_runner.go:195] Run: cat /version.json
	I0930 11:30:32.211487   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.213826   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214176   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214200   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214221   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214463   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.214607   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.214713   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.214734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214757   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214877   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.214902   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.215061   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.215198   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.215320   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.318873   34720 ssh_runner.go:195] Run: systemctl --version
	I0930 11:30:32.325516   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:32.483433   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:32.489924   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:32.489999   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:32.509691   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:32.509716   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:32.509773   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:32.529220   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:32.544880   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:32.544953   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:32.561347   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:32.576185   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:32.696192   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:32.856000   34720 docker.go:233] disabling docker service ...
	I0930 11:30:32.856061   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:32.872115   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:32.886462   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:33.019718   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:33.149810   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:33.165943   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:33.188911   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:33.188984   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.202121   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:33.202191   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.214960   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.227336   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.239366   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:33.251818   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.264121   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.285246   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.297242   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:30:33.307951   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:30:33.308020   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:30:33.324031   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:30:33.335459   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:33.464418   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
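	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. As a sketch only (same drop-in path as in the log, not a verbatim dump of the file), the state they converge on is:
		# /etc/crio/crio.conf.d/02-crio.conf after the edits above (sketch)
		#   pause_image = "registry.k8s.io/pause:3.10"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   default_sysctls = [
		#     "net.ipv4.ip_unprivileged_port_start=0",
		#   ]
		sudo systemctl daemon-reload && sudo systemctl restart crio   # as run by the test immediately above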
	I0930 11:30:33.563219   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:30:33.563313   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:30:33.568915   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:30:33.568982   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:30:33.575600   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:30:33.617027   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
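	The version probe above talks to CRI-O through the endpoint written to /etc/crictl.yaml earlier in this log. A minimal reproduction from inside the VM, using that same endpoint, would be:
		# reproduce the runtime probe recorded above (same endpoint the test wrote)
		printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
		sudo crictl version    # expects RuntimeName cri-o, RuntimeVersion 1.29.1
		crio --version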
	I0930 11:30:33.617123   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.651093   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.682607   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:30:33.684108   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:33.687198   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687568   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:33.687586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687860   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:30:33.692422   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:30:33.706358   34720 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:30:33.706513   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:33.706553   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:33.741648   34720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:30:33.741712   34720 ssh_runner.go:195] Run: which lz4
	I0930 11:30:33.746514   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:30:33.746605   34720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:30:33.751033   34720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:30:33.751094   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:30:35.211096   34720 crio.go:462] duration metric: took 1.464517464s to copy over tarball
	I0930 11:30:35.211178   34720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:30:37.290495   34720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.079293521s)
	I0930 11:30:37.290519   34720 crio.go:469] duration metric: took 2.079396835s to extract the tarball
	I0930 11:30:37.290526   34720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:30:37.328103   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:37.375779   34720 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:30:37.375803   34720 cache_images.go:84] Images are preloaded, skipping loading
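Editor's note on the preload check above (crio.go:510/514): minikube decides whether the preloaded image tarball must be copied and extracted by asking CRI-O for its image list and looking for the expected kube-apiserver tag. A minimal, illustrative Go sketch of that check (not minikube's actual code; the JSON field names are taken from `crictl images --output json` output):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages mirrors the relevant part of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether CRI-O already knows about the given image tag,
// e.g. "registry.k8s.io/kube-apiserver:v1.31.1".
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if ok {
		fmt.Println("images already preloaded, skipping tarball copy")
	} else {
		fmt.Println("image missing, copying and extracting preload tarball")
	}
}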
	I0930 11:30:37.375810   34720 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:30:37.375919   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:30:37.376009   34720 ssh_runner.go:195] Run: crio config
	I0930 11:30:37.430483   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:30:37.430505   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:30:37.430513   34720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:30:37.430534   34720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:30:37.430658   34720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:30:37.430678   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:30:37.430719   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:30:37.447824   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:30:37.447927   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
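Editor's note: the "generating kube-vip config" step above fills a static-pod manifest with the virtual IP (192.168.39.254), the interface, and the load-balancing flags that kube-vip.go:167 auto-enabled. An illustrative sketch of that kind of rendering with Go's text/template (parameter names and the trimmed manifest are assumptions, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest; the field names
// here are illustrative, not minikube's.
type vipParams struct {
	VIP       string // control-plane virtual IP
	Interface string // NIC the VIP is announced on
	Port      int    // API server port
	LBEnable  bool   // enable control-plane load balancing
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: lb_enable
      value: "{{ .LBEnable }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Render to stdout; minikube instead ships the result to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
	if err := t.Execute(os.Stdout, vipParams{
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Port:      8443,
		LBEnable:  true,
	}); err != nil {
		panic(err)
	}
}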
	I0930 11:30:37.447977   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:30:37.458530   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:30:37.458608   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:30:37.469126   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:30:37.487666   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:30:37.505980   34720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:30:37.524942   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:30:37.543099   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:30:37.547174   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
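Editor's note: the two commands above first grep /etc/hosts for an existing control-plane.minikube.internal entry, and only if the exact mapping is missing do they rewrite the file by filtering out any stale line ending in a tab plus that hostname, appending the fresh "IP<TAB>name" entry to a temp file, and copying it back with sudo. A hypothetical Go helper that assembles the same remote one-liner (not minikube's own code) could look like:

package main

import "fmt"

// ensureHostsCmd returns a bash one-liner that strips any existing /etc/hosts
// entry for name and appends "ip<TAB>name", matching the command logged above.
func ensureHostsCmd(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(ensureHostsCmd("192.168.39.254", "control-plane.minikube.internal"))
}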
	I0930 11:30:37.560565   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:37.703633   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:30:37.722433   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:30:37.722455   34720 certs.go:194] generating shared ca certs ...
	I0930 11:30:37.722471   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:37.722631   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:30:37.722669   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:30:37.722678   34720 certs.go:256] generating profile certs ...
	I0930 11:30:37.722756   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:30:37.722813   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:30:37.722850   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:30:37.722861   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:30:37.722873   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:30:37.722886   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:30:37.722898   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:30:37.722909   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:30:37.722931   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:30:37.722944   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:30:37.722956   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:30:37.723015   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:30:37.723047   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:30:37.723058   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:30:37.723082   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:30:37.723107   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:30:37.723127   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:30:37.723167   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:37.723194   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:37.723207   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:30:37.723219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:30:37.723778   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:30:37.765086   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:30:37.796973   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:30:37.825059   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:30:37.855521   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:30:37.899131   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:30:37.930900   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:30:37.980558   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:30:38.038804   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:30:38.087704   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:30:38.115070   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:30:38.143055   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:30:38.165228   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:30:38.181120   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:30:38.193472   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199554   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199622   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.206544   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:30:38.218674   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:30:38.230696   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235800   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235869   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.242027   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:30:38.253962   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:30:38.265695   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270860   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270930   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.277134   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
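Editor's note: the ls/openssl/ln sequence repeated three times above installs each CA bundle under /etc/ssl/certs/<subject-hash>.0, which is where OpenSSL-based clients on the node look certificates up. An illustrative Go sketch of one round of that dance, shelling out to the same openssl invocation (paths and the helper name are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert hashes pemPath with `openssl x509 -hash` and symlinks it into
// /etc/ssl/certs/<hash>.0, mirroring the commands in the log above.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, then point <hash>.0 at the certificate.
	if err := os.Remove(link); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}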
	I0930 11:30:38.288946   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:30:38.294078   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:30:38.300823   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:30:38.307442   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:30:38.314085   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:30:38.320482   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:30:38.327174   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
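Editor's note: each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check with Go's standard library (an illustrative sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; would regenerate")
	}
}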
	I0930 11:30:38.333995   34720 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:30:38.334150   34720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:30:38.334251   34720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:30:38.372351   34720 cri.go:89] found id: ""
	I0930 11:30:38.372413   34720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:30:38.383026   34720 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:30:38.383043   34720 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:30:38.383100   34720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:30:38.394015   34720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:30:38.394528   34720 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:30:38.394558   34720 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.249:8443
	I0930 11:30:38.394772   34720 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-3842/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I0930 11:30:38.395022   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.395487   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.395704   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:30:38.396149   34720 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:30:38.396377   34720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:30:38.407784   34720 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0930 11:30:38.407813   34720 kubeadm.go:597] duration metric: took 24.764144ms to restartPrimaryControlPlane
	I0930 11:30:38.407821   34720 kubeadm.go:394] duration metric: took 73.840194ms to StartCluster
	I0930 11:30:38.407838   34720 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.407924   34720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.408750   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.409039   34720 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:30:38.409099   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:30:38.409119   34720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:30:38.409305   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.411175   34720 out.go:177] * Enabled addons: 
	I0930 11:30:38.412776   34720 addons.go:510] duration metric: took 3.663147ms for enable addons: enabled=[]
	I0930 11:30:38.412820   34720 start.go:246] waiting for cluster config update ...
	I0930 11:30:38.412828   34720 start.go:255] writing updated cluster config ...
	I0930 11:30:38.414670   34720 out.go:201] 
	I0930 11:30:38.416408   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.416501   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.418474   34720 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:30:38.419875   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:38.419902   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:30:38.420019   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:30:38.420031   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:30:38.420138   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.420331   34720 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:38.420373   34720 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:30:38.420384   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:38.420389   34720 fix.go:54] fixHost starting: m02
	I0930 11:30:38.420682   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:38.420704   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:38.436048   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0930 11:30:38.436591   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:38.437106   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:38.437129   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:38.437434   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:38.437608   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:38.437762   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:30:38.439609   34720 fix.go:112] recreateIfNeeded on ha-033260-m02: state=Stopped err=<nil>
	I0930 11:30:38.439637   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	W0930 11:30:38.439785   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:38.443504   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m02" ...
	I0930 11:30:38.445135   34720 main.go:141] libmachine: (ha-033260-m02) Calling .Start
	I0930 11:30:38.445476   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:30:38.446588   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:30:38.447039   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:30:38.447376   34720 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:30:38.448426   34720 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:30:39.710879   34720 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:30:39.711874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.712365   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.712441   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.712367   35943 retry.go:31] will retry after 217.001727ms: waiting for machine to come up
	I0930 11:30:39.931176   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.931746   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.931795   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.931690   35943 retry.go:31] will retry after 360.379717ms: waiting for machine to come up
	I0930 11:30:40.293305   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.293927   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.293956   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.293884   35943 retry.go:31] will retry after 440.189289ms: waiting for machine to come up
	I0930 11:30:40.735666   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.736141   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.736171   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.736077   35943 retry.go:31] will retry after 458.690004ms: waiting for machine to come up
	I0930 11:30:41.196951   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.197392   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.197421   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.197336   35943 retry.go:31] will retry after 554.052986ms: waiting for machine to come up
	I0930 11:30:41.753199   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.753680   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.753707   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.753643   35943 retry.go:31] will retry after 931.699083ms: waiting for machine to come up
	I0930 11:30:42.686931   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:42.687320   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:42.687351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:42.687256   35943 retry.go:31] will retry after 1.166098452s: waiting for machine to come up
	I0930 11:30:43.855595   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:43.856179   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:43.856196   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:43.856132   35943 retry.go:31] will retry after 902.212274ms: waiting for machine to come up
	I0930 11:30:44.759588   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:44.760139   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:44.760169   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:44.760094   35943 retry.go:31] will retry after 1.732785907s: waiting for machine to come up
	I0930 11:30:46.495220   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:46.495722   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:46.495751   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:46.495670   35943 retry.go:31] will retry after 1.455893126s: waiting for machine to come up
	I0930 11:30:47.952835   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:47.953164   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:47.953186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:47.953117   35943 retry.go:31] will retry after 1.846394006s: waiting for machine to come up
	I0930 11:30:49.801836   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:49.802224   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:49.802255   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:49.802148   35943 retry.go:31] will retry after 3.334677314s: waiting for machine to come up
	I0930 11:30:53.140758   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:53.141162   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:53.141198   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:53.141142   35943 retry.go:31] will retry after 4.392553354s: waiting for machine to come up
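Editor's note: the "will retry after …" lines above come from minikube's retry helper polling libvirt for the VM's DHCP lease with growing, jittered intervals. A minimal sketch of such a backoff loop (illustrative only, not the actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, roughly doubling interval between tries, like the waits logged above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Add up to 50% jitter so concurrent waiters don't poll in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}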
	I0930 11:30:57.535667   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536094   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536115   34720 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:30:57.536128   34720 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:30:57.536667   34720 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:30:57.536690   34720 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:30:57.536702   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.536717   34720 main.go:141] libmachine: (ha-033260-m02) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"}
	I0930 11:30:57.536733   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:30:57.538801   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539092   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.539118   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539287   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:30:57.539307   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:30:57.539337   34720 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:57.539351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:30:57.539361   34720 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:30:57.665932   34720 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:57.666273   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:30:57.666869   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:57.669186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669581   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.669611   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669933   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:57.670195   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:57.670214   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:57.670410   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.672489   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.672840   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.672867   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.673009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.673202   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673389   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673514   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.673661   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.673838   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.673848   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:57.786110   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:57.786133   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786377   34720 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:30:57.786400   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786574   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.789039   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789439   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.789465   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789633   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.789794   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.789948   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.790053   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.790195   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.790374   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.790385   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:30:57.917415   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:30:57.917438   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.920154   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920496   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.920529   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920721   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.920892   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921046   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921171   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.921311   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.921493   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.921509   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:58.045391   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:58.045417   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:58.045437   34720 buildroot.go:174] setting up certificates
	I0930 11:30:58.045462   34720 provision.go:84] configureAuth start
	I0930 11:30:58.045479   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:58.045758   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.048321   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048721   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.048743   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048920   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.051229   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051564   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.051591   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051758   34720 provision.go:143] copyHostCerts
	I0930 11:30:58.051783   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051822   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:58.051830   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051885   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:58.051973   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.051994   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:58.051999   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.052023   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:58.052120   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052140   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:58.052144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:58.052236   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
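Editor's note: provision.go:117 above mints a per-machine server certificate signed by the minikube CA, with the SANs listed in the log (127.0.0.1, 192.168.39.3, ha-033260-m02, localhost, minikube). A condensed, illustrative sketch of that kind of CA-signed issuance with crypto/x509 (not minikube's implementation; it assumes a PKCS#1 RSA CA key in ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads a PEM file and returns the DER bytes of its first block.
func mustDecode(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
	if err != nil {
		panic(err)
	}

	// Fresh key pair for the machine's server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		DNSNames:    []string{"ha-033260-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.3")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}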
	I0930 11:30:58.137309   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:58.137363   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:58.137388   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.139915   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140158   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.140185   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140386   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.140552   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.140695   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.140798   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.228976   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:58.229076   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:58.254635   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:58.254717   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:30:58.279904   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:58.279982   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:58.305451   34720 provision.go:87] duration metric: took 259.975115ms to configureAuth
	I0930 11:30:58.305480   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:58.305758   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:58.305834   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.308335   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.308803   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.308825   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.309009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.309198   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309332   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309439   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.309633   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.309804   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.309818   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:58.549247   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:58.549271   34720 machine.go:96] duration metric: took 879.062425ms to provisionDockerMachine
	I0930 11:30:58.549282   34720 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:30:58.549291   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:58.549307   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.549711   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:58.549753   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.552476   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.552924   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.552952   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.553077   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.553265   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.553440   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.553591   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.641113   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:58.645683   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:58.645710   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:58.645780   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:58.645871   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:58.645881   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:58.645976   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:58.656118   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:58.683428   34720 start.go:296] duration metric: took 134.134961ms for postStartSetup
	I0930 11:30:58.683471   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.683772   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:58.683796   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.686150   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686552   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.686580   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686712   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.686921   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.687033   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.687137   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.772957   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:58.773054   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:58.831207   34720 fix.go:56] duration metric: took 20.410809297s for fixHost
	I0930 11:30:58.831256   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.834153   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834531   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.834561   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834754   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.834963   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835129   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835280   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.835497   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.835715   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.835747   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:58.950852   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695858.923209005
	
	I0930 11:30:58.950874   34720 fix.go:216] guest clock: 1727695858.923209005
	I0930 11:30:58.950882   34720 fix.go:229] Guest: 2024-09-30 11:30:58.923209005 +0000 UTC Remote: 2024-09-30 11:30:58.831234705 +0000 UTC m=+335.558365405 (delta=91.9743ms)
	I0930 11:30:58.950897   34720 fix.go:200] guest clock delta is within tolerance: 91.9743ms
	I0930 11:30:58.950902   34720 start.go:83] releasing machines lock for "ha-033260-m02", held for 20.530522823s
	I0930 11:30:58.950922   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.951203   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.954037   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.954470   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.954495   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.956428   34720 out.go:177] * Found network options:
	I0930 11:30:58.958147   34720 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:30:58.959662   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.959685   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960216   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960383   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960470   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:58.960516   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:30:58.960557   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.960638   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:58.960661   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.963506   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963693   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.963901   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964044   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964186   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964190   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.964217   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964364   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964379   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964505   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.964524   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964643   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964756   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:59.185932   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:59.192578   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:59.192645   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:59.212639   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:59.212663   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:59.212730   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:59.233596   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:59.248239   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:59.248310   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:59.262501   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:59.277031   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:59.408627   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:59.575087   34720 docker.go:233] disabling docker service ...
	I0930 11:30:59.575157   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:59.590510   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:59.605151   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:59.739478   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:59.876906   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:59.891632   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:59.911543   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:59.911601   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.923050   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:59.923114   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.934427   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.945682   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.957111   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:59.968813   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.980975   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.999767   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:31:00.011463   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:31:00.021740   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:31:00.021804   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:31:00.036575   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:31:00.046724   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:00.166031   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:31:00.263048   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:31:00.263104   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:31:00.268250   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:31:00.268319   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:31:00.272426   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:31:00.321494   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:31:00.321561   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.350506   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.381505   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:31:00.383057   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:31:00.384433   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:31:00.387430   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.387871   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:31:00.387903   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.388092   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:31:00.392819   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:00.406199   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:31:00.406474   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:00.406842   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.406891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.421565   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0930 11:31:00.422022   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.422477   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.422501   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.422814   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.423031   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:31:00.424747   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:31:00.425025   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.425059   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.439760   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:31:00.440237   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.440699   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.440716   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.441029   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.441215   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:31:00.441357   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:31:00.441367   34720 certs.go:194] generating shared ca certs ...
	I0930 11:31:00.441380   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.441501   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:31:00.441541   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:31:00.441555   34720 certs.go:256] generating profile certs ...
	I0930 11:31:00.441653   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:31:00.441679   34720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173
	I0930 11:31:00.441696   34720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:31:00.711479   34720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 ...
	I0930 11:31:00.711512   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173: {Name:mk8969b2efcc5de06d527c6abe25d7f8f8bfba86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711706   34720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 ...
	I0930 11:31:00.711723   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173: {Name:mkcb971c29eb187169c6672af3a12c14dd523134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711815   34720 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:31:00.711977   34720 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:31:00.712110   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:31:00.712126   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:31:00.712141   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:31:00.712175   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:31:00.712192   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:31:00.712204   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:31:00.712217   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:31:00.712228   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:31:00.712238   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:31:00.712287   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:31:00.712314   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:31:00.712324   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:31:00.712348   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:31:00.712369   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:31:00.712408   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:31:00.712446   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:31:00.712473   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:31:00.712487   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:00.712499   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:31:00.712528   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:31:00.715756   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716154   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:31:00.716181   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716374   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:31:00.716558   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:31:00.716720   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:31:00.716893   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:31:00.794084   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:31:00.799675   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:31:00.812361   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:31:00.817141   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:31:00.828855   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:31:00.833566   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:31:00.844934   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:31:00.849462   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:31:00.860080   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:31:00.864183   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:31:00.875695   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:31:00.880202   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:31:00.891130   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:31:00.918693   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:31:00.944303   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:31:00.969526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:31:00.996710   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:31:01.023015   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:31:01.050381   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:31:01.076757   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:31:01.103526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:31:01.129114   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:31:01.155177   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:31:01.180954   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:31:01.199391   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:31:01.218184   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:31:01.238266   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:31:01.258183   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:31:01.276632   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:31:01.294303   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:31:01.312244   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:31:01.318735   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:31:01.330839   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.335928   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.336000   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.342463   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:31:01.353941   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:31:01.365658   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370653   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370714   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.376795   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:31:01.388155   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:31:01.399831   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404901   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404967   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.411138   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:31:01.422294   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:31:01.426988   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:31:01.433816   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:31:01.440682   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:31:01.447200   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:31:01.454055   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:31:01.460508   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:31:01.466735   34720 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:31:01.466882   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:31:01.466926   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:31:01.466986   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:31:01.485425   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:31:01.485500   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:31:01.485555   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:31:01.495844   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:31:01.495903   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:31:01.505526   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:31:01.523077   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:31:01.540915   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:31:01.558204   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:31:01.562410   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:01.575484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.701502   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.719655   34720 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:31:01.719937   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:01.723162   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:31:01.724484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.910906   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.933340   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:31:01.933718   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:31:01.933803   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:31:01.934081   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:01.934248   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:01.934259   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:01.934274   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:01.934285   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:06.735523   34720 round_trippers.go:574] Response Status:  in 4801 milliseconds
	I0930 11:31:07.735873   34720 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735937   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735944   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:07.735954   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:07.735960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:17.737130   34720 round_trippers.go:574] Response Status:  in 10001 milliseconds
	I0930 11:31:17.737228   34720 node_ready.go:53] error getting node "ha-033260-m02": Get "https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.39.1:51024->192.168.39.249:8443: read: connection reset by peer
	I0930 11:31:17.737312   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:17.737324   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:17.737335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:17.737343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.500223   34720 round_trippers.go:574] Response Status: 200 OK in 3762 milliseconds
	I0930 11:31:21.501292   34720 node_ready.go:53] node "ha-033260-m02" has status "Ready":"Unknown"
	I0930 11:31:21.501373   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.501386   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.501397   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.501404   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.519310   34720 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:31:21.934926   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.934946   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.934956   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.934960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.940164   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:22.434503   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.434527   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.434544   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.434553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.438661   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:22.934869   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.934914   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.934923   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.934927   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.937891   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.435280   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.435301   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.435309   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.435314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.441790   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.444141   34720 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:31:23.444180   34720 node_ready.go:38] duration metric: took 21.510052339s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:23.444195   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:31:23.444252   34720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:31:23.444273   34720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:31:23.444364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:23.444380   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.444392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.444401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.454505   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:23.465935   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.466047   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:31:23.466061   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.466072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.466081   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.474857   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:23.475614   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.475635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.475647   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.475654   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.478510   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.479069   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.479097   34720 pod_ready.go:82] duration metric: took 13.131126ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479109   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479186   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:31:23.479199   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.479208   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.479213   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.485985   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.486909   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.486931   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.486941   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.486947   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490284   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.490832   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.490853   34720 pod_ready.go:82] duration metric: took 11.73655ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490864   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:31:23.490962   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.490972   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.498681   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:23.499421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.499443   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.499460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.499466   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.503369   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.503948   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.503974   34720 pod_ready.go:82] duration metric: took 13.102363ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.503986   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.504068   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:23.504080   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.504090   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.504097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.510528   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.511092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.511107   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.511115   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.511122   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.515703   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:24.004536   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.004560   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.004580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.004588   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.008341   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.009009   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.009023   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.009030   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.009038   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.011924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:24.504942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.504982   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.504991   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.504996   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.508600   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.509408   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.509428   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.509437   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.509441   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.512140   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.005082   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.005104   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.005112   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.005115   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.008608   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:25.009145   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.009159   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.009166   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.009172   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.012052   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.505333   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.505422   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.505445   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.505470   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.544680   34720 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0930 11:31:25.545744   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.545758   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.545766   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.545771   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.559955   34720 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0930 11:31:25.560548   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:26.004848   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.004869   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.004877   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.004881   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.008562   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.009380   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.009397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.009407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.009413   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.012491   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.504290   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.504315   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.504327   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.504335   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.508059   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.508795   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.508813   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.508823   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.508828   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.512273   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.004525   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.004546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.004555   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.004560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009158   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:27.009942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.009959   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.009967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.013093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.505035   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.505082   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.505093   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.505100   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.508864   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.509652   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.509670   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.509681   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.509687   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.512440   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:28.005011   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.005040   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.005051   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.005058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.013343   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:28.014728   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.014745   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.014754   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.014758   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.036177   34720 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0930 11:31:28.037424   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:28.504206   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.504241   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.504249   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.504254   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.511361   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:28.512356   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.512373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.512383   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.512389   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.525172   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:31:29.005163   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.005184   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.005195   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.005200   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.010684   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.011486   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.011516   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.011528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.011535   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.017470   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.505132   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.505152   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.505162   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.505168   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.518955   34720 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0930 11:31:29.519584   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.519602   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.519612   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.519619   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.530475   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:30.004860   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.004881   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.004889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.004893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.008564   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.009192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.009207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.009215   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.009220   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.013399   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.504171   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.504195   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.504205   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.504210   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.507972   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.509257   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.509275   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.509283   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.509286   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.513975   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.514510   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:31.004737   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.004765   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.004775   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.004780   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010196   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:31.010880   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.010900   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.010912   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010919   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.014567   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:31.504379   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.504397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.504405   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.504409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.511899   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:31.513088   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.513111   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.513122   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.513128   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.516398   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.005079   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.005119   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.005131   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.005138   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.009300   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:32.010097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.010118   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.010130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.010137   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.013237   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.505168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.505192   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.505203   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.505209   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.509155   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.509935   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.509953   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.509960   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.509964   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.513296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:33.004767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.004802   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.004812   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.004818   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.009316   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:33.009983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.009997   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.010005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.010018   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.012955   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:33.013498   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:33.504397   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.504432   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.504443   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.504450   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.620464   34720 round_trippers.go:574] Response Status: 200 OK in 115 milliseconds
	I0930 11:31:33.621445   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.621467   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.621479   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.621486   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.624318   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.004311   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:34.004332   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.004341   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.004346   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.008601   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.009530   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.009546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.009553   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.009556   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.013047   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.013767   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.013788   34720 pod_ready.go:82] duration metric: took 10.509794387s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013800   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013877   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:31:34.013888   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.013899   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.013908   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.021427   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:34.022374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.022393   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.022405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.022412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.026491   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.027124   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.027154   34720 pod_ready.go:82] duration metric: took 13.341195ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027184   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027276   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:31:34.027289   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.027300   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.027306   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.031483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.032050   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.032064   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.032072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.032075   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.035296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.035760   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.035779   34720 pod_ready.go:82] duration metric: took 8.586877ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035787   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035853   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:31:34.035863   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.035870   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.035874   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.040970   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.041904   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.041918   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.041926   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.041929   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.046986   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.047525   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.047542   34720 pod_ready.go:82] duration metric: took 11.747596ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047550   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047603   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:31:34.047611   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.047617   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.047621   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.053430   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.054003   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.054018   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.054025   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.054029   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.056888   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.057338   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.057358   34720 pod_ready.go:82] duration metric: took 9.802193ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.057367   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.204770   34720 request.go:632] Waited for 147.330113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204839   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204844   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.204851   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.204860   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.209352   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.404334   34720 request.go:632] Waited for 194.306843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404424   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.404441   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.404444   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.408185   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.605268   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.605293   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.605306   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.605311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.608441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.804521   34720 request.go:632] Waited for 195.318558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804587   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804592   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.804600   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.804607   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.808658   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.058569   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.058598   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.058609   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.058614   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.062153   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.204479   34720 request.go:632] Waited for 141.249746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204567   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204575   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.204586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.204594   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.209332   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.558083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.558103   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.558111   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.558116   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.562046   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.605131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.605167   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.605179   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.605184   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.616080   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:36.058179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:36.058207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.058218   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.058236   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.062566   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:36.063353   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:36.063373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.063384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.063390   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.066635   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.067352   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.067373   34720 pod_ready.go:82] duration metric: took 2.009999965s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
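The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's per-client rate limiter delaying requests locally, not from the apiserver. A hedged sketch of the usual knob, raising QPS/Burst on the rest.Config before building the clientset; the numbers and helper name are illustrative examples, not minikube's settings:

package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose client-side rate limiter permits more
// requests per second than the conservative defaults; with the defaults,
// bursts of back-to-back GETs produce the throttling waits seen in the log.
func newClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // example value; the client-go default is much lower
	cfg.Burst = 100 // example value; allows short bursts without local delay
	return kubernetes.NewForConfig(cfg)
}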
	I0930 11:31:36.067382   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.204802   34720 request.go:632] Waited for 137.362306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204868   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204890   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.204901   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.204907   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.208231   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.404396   34720 request.go:632] Waited for 195.331717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404465   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.404473   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.404477   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.408489   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.409278   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.409299   34720 pod_ready.go:82] duration metric: took 341.910503ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.409308   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.604639   34720 request.go:632] Waited for 195.258772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604699   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604706   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.604716   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.604721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.608453   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.804560   34720 request.go:632] Waited for 195.30805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804622   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.804645   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.804651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.808127   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.808836   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.808857   34720 pod_ready.go:82] duration metric: took 399.543561ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.808867   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.004923   34720 request.go:632] Waited for 195.985958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004973   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004978   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.004985   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.004989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.008223   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.205282   34720 request.go:632] Waited for 196.371879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205357   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205362   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.205369   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.205374   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.208700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.209207   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.209239   34720 pod_ready.go:82] duration metric: took 400.365138ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.209250   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.405282   34720 request.go:632] Waited for 195.959121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405389   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405398   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.405409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.405429   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.409314   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.605347   34720 request.go:632] Waited for 195.282379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.605450   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.605459   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.608764   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.609479   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.609498   34720 pod_ready.go:82] duration metric: took 400.240233ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.609507   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.804579   34720 request.go:632] Waited for 195.010584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804657   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804664   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.804671   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.804675   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.808363   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.005248   34720 request.go:632] Waited for 196.304263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005314   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005321   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.005330   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.005333   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.009635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:38.010535   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.010557   34720 pod_ready.go:82] duration metric: took 401.042919ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.010566   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.204595   34720 request.go:632] Waited for 193.96721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204665   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204677   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.204689   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.204696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.208393   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.404559   34720 request.go:632] Waited for 195.429784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404620   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.404641   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.404646   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.408057   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.408674   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.408694   34720 pod_ready.go:82] duration metric: took 398.12275ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.408703   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.605374   34720 request.go:632] Waited for 196.589593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605437   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.605444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.605449   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.609411   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.804516   34720 request.go:632] Waited for 194.287587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804579   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.804586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.804589   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.808043   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.808604   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.808623   34720 pod_ready.go:82] duration metric: took 399.91394ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.808637   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.004815   34720 request.go:632] Waited for 196.10639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004881   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004887   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.004895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.004900   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.008293   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.204330   34720 request.go:632] Waited for 195.292523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204402   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204410   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.204419   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.204428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.208212   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.208803   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.208826   34720 pod_ready.go:82] duration metric: took 400.181261ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.208843   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.404860   34720 request.go:632] Waited for 195.933233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404913   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404919   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.404926   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.404931   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.408874   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.604903   34720 request.go:632] Waited for 195.413864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604970   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604975   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.604983   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.604987   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.608209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.608764   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.608784   34720 pod_ready.go:82] duration metric: took 399.933732ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.608794   34720 pod_ready.go:39] duration metric: took 16.164585673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
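The readiness wait that just finished is the pod_ready polling pattern visible above: GET the pod, check its Ready condition, GET its node, and retry roughly every half second. Below is a minimal client-go sketch of that pattern; the 500ms interval, namespace handling, and function name are assumptions for illustration, not minikube's actual implementation.

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls kube-system for the named pod until its Ready condition
// reports True or the timeout expires, mirroring the GET loop in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}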
	I0930 11:31:39.608807   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:31:39.608855   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:31:39.626199   34720 api_server.go:72] duration metric: took 37.906495975s to wait for apiserver process to appear ...
	I0930 11:31:39.626228   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:31:39.626249   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:31:39.630779   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:31:39.630856   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:31:39.630864   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.630872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.630879   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.631851   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:31:39.631971   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:31:39.631987   34720 api_server.go:131] duration metric: took 5.751654ms to wait for apiserver health ...
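The health gate logged just above is a plain GET against the apiserver's /healthz endpoint, accepted when it returns 200 with body "ok", followed by a GET /version to read the control-plane version. A minimal sketch of that probe; TLS and client-certificate setup are omitted and the helper name is an assumption:

package example

import (
	"io"
	"net/http"
	"strings"
)

// apiserverHealthy issues GET <base>/healthz and reports true only for a
// 200 response whose body is "ok", matching the "returned 200: ok" line above.
func apiserverHealthy(client *http.Client, base string) (bool, error) {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}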
	I0930 11:31:39.631994   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:31:39.805247   34720 request.go:632] Waited for 173.189912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805322   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805328   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.805335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.805339   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.811658   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:39.818704   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:31:39.818737   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818745   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818751   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:39.818754   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:39.818758   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:39.818761   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:39.818766   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:39.818769   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:39.818772   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:39.818777   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:39.818781   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:39.818787   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:39.818792   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:39.818797   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:39.818803   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:39.818809   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:39.818814   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:39.818820   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:39.818828   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:39.818834   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:39.818840   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:39.818843   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:39.818846   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:39.818852   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:39.818855   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:39.818858   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:39.818864   34720 system_pods.go:74] duration metric: took 186.864889ms to wait for pod list to return data ...
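The 26-pod inventory above comes from one list of the kube-system namespace, with each pod's phase reported. A rough client-go equivalent, with a hypothetical helper name:

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods lists kube-system once and reports each pod's phase,
// roughly what the per-pod "Running" lines above reflect.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return fmt.Errorf("no kube-system pods found yet")
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
	return nil
}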
	I0930 11:31:39.818873   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:31:40.005326   34720 request.go:632] Waited for 186.370068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005389   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.005396   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.005401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.009301   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.009537   34720 default_sa.go:45] found service account: "default"
	I0930 11:31:40.009555   34720 default_sa.go:55] duration metric: took 190.676192ms for default service account to be created ...
	I0930 11:31:40.009564   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:31:40.205063   34720 request.go:632] Waited for 195.430952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205139   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.205147   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.205150   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.210696   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:40.219002   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:31:40.219052   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219065   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219074   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:40.219081   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:40.219086   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:40.219092   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:40.219097   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:40.219103   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:40.219108   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:40.219115   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:40.219123   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:40.219130   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:40.219137   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:40.219145   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:40.219149   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:40.219155   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:40.219158   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:40.219162   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:40.219168   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:40.219171   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:40.219177   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:40.219181   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:40.219186   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:40.219190   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:40.219193   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:40.219196   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:40.219204   34720 system_pods.go:126] duration metric: took 209.632746ms to wait for k8s-apps to be running ...
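The k8s-apps wait polls /api/v1/namespaces/kube-system/pods and requires every listed pod to report phase Running. A rough manual equivalent of that check (hypothetical, assuming the "ha-033260" context):

  # list kube-system pods with their phase, as the system_pods.go wait loop does
  kubectl --context ha-033260 -n kube-system get pods \
    -o custom-columns=NAME:.metadata.name,PHASE:.status.phase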
	I0930 11:31:40.219213   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:31:40.219257   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:31:40.234570   34720 system_svc.go:56] duration metric: took 15.34883ms WaitForService to wait for kubelet
	I0930 11:31:40.234600   34720 kubeadm.go:582] duration metric: took 38.514901899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:31:40.234618   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:31:40.405060   34720 request.go:632] Waited for 170.372351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405138   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.405146   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.405152   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.409008   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.411040   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411072   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411093   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411098   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411104   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411112   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411118   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411123   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411130   34720 node_conditions.go:105] duration metric: took 176.506295ms to run NodePressure ...
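The NodePressure step reads each node's reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). A hedged one-liner that surfaces the same fields, again assuming the "ha-033260" context:

  kubectl --context ha-033260 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}cpu={.status.capacity.cpu}{"\t"}ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}{end}'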
	I0930 11:31:40.411143   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:31:40.411178   34720 start.go:255] writing updated cluster config ...
	I0930 11:31:40.413535   34720 out.go:201] 
	I0930 11:31:40.415246   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:40.415334   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.417113   34720 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:31:40.418650   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:31:40.418678   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:31:40.418775   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:31:40.418789   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:31:40.418878   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.419069   34720 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:31:40.419116   34720 start.go:364] duration metric: took 28.328µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:31:40.419128   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:31:40.419133   34720 fix.go:54] fixHost starting: m03
	I0930 11:31:40.419393   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:40.419421   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:40.434730   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0930 11:31:40.435197   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:40.435685   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:40.435709   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:40.436046   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:40.436205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:40.436359   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:31:40.437971   34720 fix.go:112] recreateIfNeeded on ha-033260-m03: state=Stopped err=<nil>
	I0930 11:31:40.437995   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	W0930 11:31:40.438139   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:31:40.440134   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m03" ...
	I0930 11:31:40.441557   34720 main.go:141] libmachine: (ha-033260-m03) Calling .Start
	I0930 11:31:40.441787   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:31:40.442656   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:31:40.442963   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:31:40.443304   34720 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:31:40.443900   34720 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:31:41.716523   34720 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:31:41.717310   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.717755   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.717843   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.717745   36275 retry.go:31] will retry after 213.974657ms: waiting for machine to come up
	I0930 11:31:41.933006   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.933445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.933470   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.933400   36275 retry.go:31] will retry after 366.443935ms: waiting for machine to come up
	I0930 11:31:42.300826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.301240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.301268   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.301200   36275 retry.go:31] will retry after 298.736034ms: waiting for machine to come up
	I0930 11:31:42.601863   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.602344   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.602373   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.602300   36275 retry.go:31] will retry after 422.049065ms: waiting for machine to come up
	I0930 11:31:43.025989   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.026495   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.026518   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.026460   36275 retry.go:31] will retry after 501.182735ms: waiting for machine to come up
	I0930 11:31:43.529199   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.529601   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.529643   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.529556   36275 retry.go:31] will retry after 658.388185ms: waiting for machine to come up
	I0930 11:31:44.189982   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:44.190445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:44.190485   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:44.190396   36275 retry.go:31] will retry after 869.323325ms: waiting for machine to come up
	I0930 11:31:45.061299   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:45.061826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:45.061855   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:45.061762   36275 retry.go:31] will retry after 1.477543518s: waiting for machine to come up
	I0930 11:31:46.540654   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:46.541062   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:46.541088   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:46.541024   36275 retry.go:31] will retry after 1.217619831s: waiting for machine to come up
	I0930 11:31:47.760283   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:47.760670   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:47.760692   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:47.760626   36275 retry.go:31] will retry after 1.524149013s: waiting for machine to come up
	I0930 11:31:49.286693   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:49.287173   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:49.287205   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:49.287119   36275 retry.go:31] will retry after 2.547999807s: waiting for machine to come up
	I0930 11:31:51.836378   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:51.836878   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:51.836903   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:51.836847   36275 retry.go:31] will retry after 3.478582774s: waiting for machine to come up
	I0930 11:31:55.318753   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:55.319267   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:55.319288   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:55.319225   36275 retry.go:31] will retry after 4.232487143s: waiting for machine to come up
	I0930 11:31:59.554587   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555031   34720 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:31:59.555054   34720 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:31:59.555067   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555464   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.555482   34720 main.go:141] libmachine: (ha-033260-m03) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"}
	I0930 11:31:59.555498   34720 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:31:59.555507   34720 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:31:59.555514   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:31:59.558171   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558619   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.558660   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558780   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:31:59.558806   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:31:59.558840   34720 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:31:59.558849   34720 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:31:59.558869   34720 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:31:59.689497   34720 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
	I0930 11:31:59.689854   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:31:59.690426   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:31:59.692709   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693063   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.693096   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693354   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:59.693555   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:31:59.693570   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:59.693768   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.695742   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696024   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.696050   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.696286   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696441   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696600   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.696763   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.696989   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.697005   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:31:59.810353   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:31:59.810380   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810618   34720 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:31:59.810647   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810829   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.813335   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813637   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.813661   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813848   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.814001   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814334   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.814507   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.814661   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.814672   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:31:59.949653   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:31:59.949686   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.952597   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.952969   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.952992   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.953242   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.953469   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953637   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953759   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.953884   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.954062   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.954084   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
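Condensed, the two SSH commands above set the machine hostname and make sure /etc/hosts can resolve it locally; a minimal sketch of the same sequence:

  # set the hostname and persist it, as in the first SSH command
  sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
  # add a 127.0.1.1 entry only if /etc/hosts does not already mention the hostname
  grep -q 'ha-033260-m03' /etc/hosts || echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts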
	I0930 11:32:00.079890   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:00.079918   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:00.079939   34720 buildroot.go:174] setting up certificates
	I0930 11:32:00.079950   34720 provision.go:84] configureAuth start
	I0930 11:32:00.079961   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:32:00.080205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:00.082895   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083281   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.083307   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083437   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.085443   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085756   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.085776   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085897   34720 provision.go:143] copyHostCerts
	I0930 11:32:00.085925   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.085978   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:00.085987   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.086050   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:00.086121   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086137   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:00.086142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:00.086219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086243   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:00.086252   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086288   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:00.086360   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
	I0930 11:32:00.252602   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:00.252654   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:00.252676   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.255361   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255706   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.255731   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255860   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.255996   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.256131   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.256249   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.345059   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:00.345126   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:00.370752   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:00.370827   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:32:00.397640   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:00.397703   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:00.424094   34720 provision.go:87] duration metric: took 344.128805ms to configureAuth
	I0930 11:32:00.424128   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:00.424360   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:00.424480   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.427139   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427536   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.427563   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427770   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.427949   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428043   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428125   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.428217   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.428408   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.428424   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:00.687881   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:00.687919   34720 machine.go:96] duration metric: took 994.35116ms to provisionDockerMachine
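The sysconfig drop-in written just above is how minikube passes the --insecure-registry 10.96.0.0/12 option through to CRI-O. A quick way to confirm it landed and that the daemon restarted cleanly (a sketch, not part of the harness):

  cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
  sudo systemctl is-active crio      # expect "active" after the restart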
	I0930 11:32:00.687935   34720 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:32:00.687950   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:00.687976   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.688322   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:00.688349   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.691216   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691735   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.691763   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691959   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.692185   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.692344   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.692469   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.781946   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:00.786396   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:00.786417   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:00.786494   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:00.786562   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:00.786571   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:00.786646   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:00.796771   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:00.822239   34720 start.go:296] duration metric: took 134.285857ms for postStartSetup
	I0930 11:32:00.822297   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.822594   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:00.822622   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.825375   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825743   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.825764   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825954   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.826142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.826331   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.826492   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.912681   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:00.912751   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:00.970261   34720 fix.go:56] duration metric: took 20.551120789s for fixHost
	I0930 11:32:00.970311   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.973284   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973694   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.973722   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973873   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.974035   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974161   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974267   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.974426   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.974622   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.974633   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:01.099052   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695921.066520843
	
	I0930 11:32:01.099078   34720 fix.go:216] guest clock: 1727695921.066520843
	I0930 11:32:01.099089   34720 fix.go:229] Guest: 2024-09-30 11:32:01.066520843 +0000 UTC Remote: 2024-09-30 11:32:00.970290394 +0000 UTC m=+397.697421093 (delta=96.230449ms)
	I0930 11:32:01.099110   34720 fix.go:200] guest clock delta is within tolerance: 96.230449ms
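fix.go derives that delta by running date +%s.%N on the guest over SSH and comparing it with a host-side timestamp. A simplified shell sketch of the same comparison, using the key path shown earlier in the log (the real computation happens in Go):

  GUEST_TS=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa \
      docker@192.168.39.238 'date +%s.%N')
  HOST_TS=$(date +%s.%N)
  echo "clock delta: $(echo "$HOST_TS - $GUEST_TS" | bc)s"   # must stay within minikube's tolerance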
	I0930 11:32:01.099117   34720 start.go:83] releasing machines lock for "ha-033260-m03", held for 20.679993634s
	I0930 11:32:01.099137   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.099384   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:01.102141   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.102593   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.102620   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.104827   34720 out.go:177] * Found network options:
	I0930 11:32:01.106181   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:32:01.107308   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.107329   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.107343   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.107885   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108079   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108167   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:01.108222   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:32:01.108292   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.108316   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.108408   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:01.108430   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:01.111240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111542   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111663   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111698   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111858   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.111861   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111893   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.112028   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.112064   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112182   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112189   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112347   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.112360   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112529   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.339136   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:01.345573   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:01.345659   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:01.362608   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:01.362630   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:01.362686   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:01.381024   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:01.396259   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:01.396333   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:01.412406   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:01.429258   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:01.562463   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:01.730591   34720 docker.go:233] disabling docker service ...
	I0930 11:32:01.730664   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:01.755797   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:01.769489   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:01.890988   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:02.019465   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:02.036168   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:02.059913   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:02.059981   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.072160   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:02.072247   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.084599   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.096290   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.108573   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:02.120977   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.132246   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.150591   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.162524   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:02.173575   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:02.173660   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:02.188268   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:02.199979   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:02.326960   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
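The sed edits above only touch a handful of keys in /etc/crio/crio.conf.d/02-crio.conf. A quick spot-check after the restart, with the expected values taken from the commands in this log:

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",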
	I0930 11:32:02.439885   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:02.439960   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:02.446734   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:02.446849   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:02.451344   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:02.492029   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:02.492116   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.521734   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.556068   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:02.557555   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:02.558901   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:02.560920   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:02.563759   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564191   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:02.564218   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564482   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:02.569571   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
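The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway (192.168.39.1); expanded for readability, it is roughly:

  {
    grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
    echo $'192.168.39.1\thost.minikube.internal'      # append the gateway mapping
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts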
	I0930 11:32:02.585245   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:02.585463   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:02.585746   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.585790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.617422   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0930 11:32:02.617831   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.618295   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.618314   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.618694   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.618907   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:02.621016   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:02.621337   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.621378   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.636969   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I0930 11:32:02.637538   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.638051   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.638068   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.638431   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.638769   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:02.639005   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:32:02.639018   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:02.639031   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:02.639158   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:02.639204   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:02.639213   34720 certs.go:256] generating profile certs ...
	I0930 11:32:02.639277   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:32:02.639334   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:32:02.639369   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:32:02.639382   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:02.639398   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:02.639410   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:02.639423   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:02.639436   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:32:02.639451   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:32:02.639464   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:32:02.639477   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:32:02.639526   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:02.639556   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:02.639565   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:02.639587   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:02.639609   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:02.639654   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:02.639691   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:02.639715   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:02.639728   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:02.639740   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:02.639764   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:32:02.643357   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.643807   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:32:02.643839   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.644023   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:32:02.644227   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:32:02.644414   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:32:02.644553   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:32:02.726043   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:32:02.732664   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:32:02.744611   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:32:02.750045   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:32:02.763417   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:32:02.768220   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:32:02.780605   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:32:02.786158   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:32:02.802503   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:32:02.809377   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:32:02.821900   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:32:02.827740   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:32:02.842110   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:02.872987   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:02.903102   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:02.932917   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:02.966742   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:32:02.995977   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:32:03.025802   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:32:03.057227   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:32:03.085425   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:03.115042   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:03.142328   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:03.168248   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:32:03.189265   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:32:03.208719   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:32:03.227953   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:32:03.248805   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:32:03.268786   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:32:03.288511   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:32:03.309413   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:03.315862   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:03.328610   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333839   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333909   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.340595   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:03.353343   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:03.364689   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369580   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369669   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.376067   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:03.388290   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:03.400003   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405168   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405235   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.411812   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
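[editor's note] The commands above install the host's extra CA certificates the way OpenSSL expects them: each PEM is placed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject-hash name (e.g. 3ec20f2e.0, b5213941.0, 51391683.0). The Go sketch below mirrors that pattern by shelling out to openssl exactly as the remote commands do; the helper name and hard-coded paths are illustrative assumptions, not minikube's certs.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name (e.g. b5213941.0), which is what the ln -fs commands in
// the log above do on the remote VM.
func installCACert(certPath string) error {
	// openssl x509 -hash prints the subject hash used for the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")

	// Replace any stale link, mirroring `ln -fs`.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}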
	I0930 11:32:03.424569   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:03.429588   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:32:03.436748   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:32:03.443675   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:32:03.450618   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:32:03.457889   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:32:03.464815   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:32:03.471778   34720 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:32:03.471887   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:03.471924   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:32:03.471974   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:32:03.490629   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:32:03.490701   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
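[editor's note] The manifest printed above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step a few lines below), so the kubelet runs kube-vip as a static pod that holds the 192.168.39.254 VIP for the API server on port 8443. As a rough illustration of how such a manifest can be rendered, the sketch below fills a trimmed-down template with the image, VIP, and port seen in this log; the template and parameter names are assumptions, not minikube's kube-vip.go template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down static-pod template; only a handful of the env vars from the
// real manifest are kept, to show the shape of the substitution.
const kubeVIPTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

type kubeVIPParams struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(kubeVIPTemplate))

	// The kubelet treats anything under /etc/kubernetes/manifests as a static pod.
	f, err := os.Create("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	err = tmpl.Execute(f, kubeVIPParams{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0", // values taken from this log
		VIP:   "192.168.39.254",
		Port:  "8443",
	})
	if err != nil {
		panic(err)
	}
}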
	I0930 11:32:03.490761   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:03.502695   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:03.502771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:32:03.514300   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:03.532840   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:03.552583   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:32:03.570717   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:03.574725   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
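[editor's note] The one-liner above keeps /etc/hosts idempotent: it filters out any existing control-plane.minikube.internal entry before appending the VIP mapping, so repeated starts never accumulate duplicate lines. A Go equivalent of the same filter-then-append pattern might look like the sketch below; the helper name and the fixed VIP are illustrative assumptions.

package main

import (
	"os"
	"strings"
)

// pinControlPlaneHost drops any stale control-plane.minikube.internal line
// from /etc/hosts and appends a fresh mapping for the given IP, mirroring
// the grep -v / echo pipeline in the log above.
func pinControlPlaneHost(ip string) error {
	const hostsPath = "/etc/hosts"
	const suffix = "\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, suffix) {
			continue // same effect as grep -v $'\tcontrol-plane.minikube.internal$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+suffix)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinControlPlaneHost("192.168.39.254"); err != nil {
		panic(err)
	}
}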
	I0930 11:32:03.588635   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.736031   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.755347   34720 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:32:03.755606   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:03.757343   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:03.758664   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.930799   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.947764   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:03.948004   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:03.948058   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:03.948281   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.948378   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:03.948390   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.948398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.948408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.951644   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.952631   34720 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:32:03.952655   34720 node_ready.go:38] duration metric: took 4.354654ms for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.952666   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
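[editor's note] The requests that follow poll the coredns pod (and its node) roughly every 500ms until the pod reports Ready. A minimal client-go sketch of that readiness check is shown below; the kubeconfig path and pod name are taken from this log, but the code itself is an illustration, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True, which is
// the same field the "Ready":"True"/"False" lines in this log are checking.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19734-3842/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll roughly every 500ms, matching the cadence of the requests below.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-5frmm", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("coredns-7c65d6cfc9-5frmm is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}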
	I0930 11:32:03.952741   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:03.952751   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.952758   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.952763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.959043   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:03.966223   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:03.966318   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:03.966326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.966334   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.966341   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.969582   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.970409   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:03.970425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.970433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.970436   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.973995   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.466604   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.466626   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.466634   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.466638   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.470966   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.470982   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.470989   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470994   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.473518   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:04.966613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.966634   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.966642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.966647   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.970295   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.971225   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.971247   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.971256   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.971267   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.974506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.466575   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.466597   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.466605   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.466609   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.471476   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.472347   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.472369   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.472379   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.472385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.476605   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.966462   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.966484   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.966495   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.966499   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.970347   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.971438   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.971455   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.971465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.971469   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.975635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.976454   34720 pod_ready.go:103] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:06.466781   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.466807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.466818   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.466825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.470300   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.471083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.471100   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.471108   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.471111   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.474455   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.966864   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.966887   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.966895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.966899   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.970946   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:06.971993   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.972007   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.972014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.972021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.975563   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.466626   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.466651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.466664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.466671   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.471030   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:07.471751   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.471767   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.471775   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.471780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.475078   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.966446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.966464   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.966472   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.966476   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.970130   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.970892   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.970907   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.970916   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.970921   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.974558   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.467355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:08.467382   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.467392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.467398   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.491602   34720 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0930 11:32:08.492458   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.492478   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.492488   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.492494   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.504709   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:32:08.505926   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.505961   34720 pod_ready.go:82] duration metric: took 4.539705143s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.505976   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.506053   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:08.506070   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.506079   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.506091   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.513015   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:08.514472   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.514492   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.514500   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.514504   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.522097   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:08.522597   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.522620   34720 pod_ready.go:82] duration metric: took 16.634648ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522632   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522710   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:08.522720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.522730   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.522736   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.528114   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:08.529205   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.529222   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.529239   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.529245   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.532511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.533059   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.533085   34720 pod_ready.go:82] duration metric: took 10.444686ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533097   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:08.533175   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.533185   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.533194   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.536360   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.537030   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:08.537046   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.537054   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.537058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.540241   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.540684   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.540702   34720 pod_ready.go:82] duration metric: took 7.598243ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540712   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540774   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:08.540782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.540789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.540794   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.544599   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.545135   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:08.545150   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.545158   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.545161   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.548627   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.041691   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.041715   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.041724   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.041728   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.045686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.046390   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.046409   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.046420   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.046428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.050351   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.541239   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.541273   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.541285   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.541291   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.544605   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.545287   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.545303   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.545311   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.545314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.548353   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.041331   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.041356   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.041368   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.041373   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.045200   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.046010   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.046031   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.046039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.046046   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.049179   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.541488   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.541515   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.541528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.541536   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.545641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:10.546377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.546400   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.546407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.546410   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.549732   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.550616   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:11.040952   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.040974   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.040982   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.040989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.046528   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:11.047555   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.047571   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.047581   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.047586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.051499   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:11.541109   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.541139   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.541149   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.541154   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.545483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:11.546103   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.546119   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.546130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.546136   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.549272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:12.041130   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.041165   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.041176   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.041182   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.045465   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.046261   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.046277   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.046284   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.046289   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.054233   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:12.540971   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.540992   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.541000   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.541004   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.545075   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.545773   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.545789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.545799   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.545805   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.549003   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.041785   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.041807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.041817   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.041823   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.045506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.046197   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.046214   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.046221   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.046241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.048544   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:13.048911   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:13.541700   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.541728   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.541740   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.541748   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.545726   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.546727   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.546742   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.546749   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.546753   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.549687   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:14.041571   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.041593   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.041601   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.041605   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.045629   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.047164   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.047185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.047199   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.047203   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.052005   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:14.541017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.541043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.541055   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.541060   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.545027   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.546245   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.546266   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.546275   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.546280   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.549572   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.041446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.041468   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.041477   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.041481   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.045111   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.045983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.046004   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.046014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.046021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.055916   34720 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:32:15.056489   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:15.541417   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.541448   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.541460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.541465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.544952   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.545764   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.545781   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.545790   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.545795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.552050   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:16.040979   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.041003   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.041011   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.041016   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.045765   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:16.046411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.046427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.046435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.046439   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.056745   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:32:16.541660   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.541682   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.541692   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.541696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.545213   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:16.546092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.546110   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.546121   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.546126   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.548900   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.041375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.041399   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.041411   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.041417   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.045641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:17.046588   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.046611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.046621   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.046628   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.049632   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.541651   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.541676   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.541686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.541692   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.545407   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:17.546246   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.546269   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.546282   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.546290   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.549117   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.549778   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:18.041518   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.041556   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.041568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.041576   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:18.046748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.046769   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.046780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046787   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.052283   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:18.541399   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.541425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.541433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.541437   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.545011   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:18.546056   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.546078   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.546089   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.546097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.549203   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.041166   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.041201   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.041210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.041214   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.045755   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.046481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.046500   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.046510   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.046517   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.049924   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.541836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.541873   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.541885   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.541893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.546183   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.547097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.547116   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.547126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.547130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.551235   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.551688   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:20.041000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.041027   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.041039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.041053   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.045149   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.045912   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.045934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.045945   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.045950   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.049525   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:20.541792   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.541813   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.541821   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.541825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.546083   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.546947   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.546969   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.546980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.546988   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.551303   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:21.041910   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.041938   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.041950   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.041955   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.047824   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:21.048523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.048544   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.048555   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.048560   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.051690   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.541671   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.541695   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.541707   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.541714   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.545187   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.545925   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.545943   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.545957   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.549146   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.040908   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.040934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.040944   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.040949   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.044322   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.045253   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.045275   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.045286   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.045311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.048540   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.049217   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:22.541377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.541397   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.541405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.596027   34720 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0930 11:32:22.596840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.596858   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.596868   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.596876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.600101   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.041796   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.041817   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.041826   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.041830   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.046144   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:23.047374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.047396   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.047407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.047412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.051210   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.541365   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.541391   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.541403   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.544624   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.545332   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.545348   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.545356   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.545362   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.548076   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.040942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.040985   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.040995   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.040999   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.044909   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.045625   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.045642   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.045653   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.045658   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.048446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.541477   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.541497   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.541506   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.541509   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.545585   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:24.546447   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.546460   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.546468   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.546472   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.549497   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.550184   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:25.041599   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.041635   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.041645   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.041651   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.048106   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:25.048975   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.048998   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.049008   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.049013   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.054165   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:25.541178   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.541223   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.541235   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.541241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.545143   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:25.545923   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.545941   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.545962   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.549975   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.041161   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.041185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.041193   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.041199   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.045231   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:26.046025   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.046042   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.046049   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.046055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.048864   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:26.541487   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.541511   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.541521   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.541528   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.548114   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:26.548980   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.548993   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.549001   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.549005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.552757   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.553360   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:27.041590   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.041611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.041636   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.041639   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.046112   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:27.047076   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.047092   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.047100   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.047104   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.052347   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:27.541767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.541789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.541797   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.541801   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.545090   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:27.545664   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.545678   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.545686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.545690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.548839   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.041179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.041200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.041212   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.041217   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.046396   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:28.047355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.047372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.047384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.047388   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.053891   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:28.541237   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.541259   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.541268   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.541271   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545192   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.545941   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.545959   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.545967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.549204   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.550435   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.550457   34720 pod_ready.go:82] duration metric: took 20.009736872s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550559   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:32:28.550570   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.550580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.550590   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.553686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.554394   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:28.554407   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.554414   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.554420   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.556924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.557578   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.557600   34720 pod_ready.go:82] duration metric: took 7.108562ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557612   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557692   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:32:28.557702   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.557712   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.557722   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.560446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.561014   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:28.561029   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.561036   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.561040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.563867   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.564450   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.564468   34720 pod_ready.go:82] duration metric: took 6.836659ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:28.564568   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.564578   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.564586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.567937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.568639   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.568653   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.568661   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.568664   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.571277   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:29.065431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.065458   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.065466   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.065469   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.069406   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.070004   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.070020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.070028   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.070033   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.073076   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.565018   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.565043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.565052   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.565055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.568350   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.569071   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.569090   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.569101   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.569107   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.572794   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.065688   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.065710   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.065717   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.065721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.069593   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.070370   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.070385   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.070393   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.070397   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.073099   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.565351   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.565372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.565380   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.565385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.568480   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.569460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.569481   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.569489   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.569493   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.572043   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.572542   34720 pod_ready.go:103] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:31.064934   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:31.064954   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.064963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.064967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.069154   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:31.070615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.070631   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.070642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.070648   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.073638   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.074233   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.074258   34720 pod_ready.go:82] duration metric: took 2.50976614s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074273   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:32:31.074392   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.074418   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.074427   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.077429   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.078309   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:31.078326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.078336   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.078343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.080937   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.081321   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.081341   34720 pod_ready.go:82] duration metric: took 7.059128ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081353   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081418   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:32:31.081428   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.081438   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.081447   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.084351   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.084930   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:31.084944   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.084951   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.084956   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.087905   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.088473   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.088493   34720 pod_ready.go:82] duration metric: took 7.129947ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.088504   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.141826   34720 request.go:632] Waited for 53.255293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141907   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141915   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.141924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.141929   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.145412   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.341415   34720 request.go:632] Waited for 195.313156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341506   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.341520   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.341524   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.344937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.589605   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.589637   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.589646   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.589651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.593330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.741775   34720 request.go:632] Waited for 147.33103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741847   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.741857   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.741869   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.745796   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.089735   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.089761   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.089772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.089776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.093492   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.141705   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.141744   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.141752   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.141757   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.145662   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.589384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.589408   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.589418   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.589426   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.592976   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.593954   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.593971   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.593979   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.593983   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.597157   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.089690   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:33.089720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.089733   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.089743   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.094817   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:33.095412   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:33.095427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.095435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.095442   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.098967   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.099551   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.099569   34720 pod_ready.go:82] duration metric: took 2.011056626s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.099580   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.141920   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:32:33.141953   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.141961   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.141965   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.146176   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:33.342278   34720 request.go:632] Waited for 195.329061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342343   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342351   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.342362   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.342368   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.346051   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.346626   34720 pod_ready.go:98] node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346650   34720 pod_ready.go:82] duration metric: took 247.062207ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	E0930 11:32:33.346662   34720 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346673   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.541732   34720 request.go:632] Waited for 194.984853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541823   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541832   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.541839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.541846   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.545738   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.741681   34720 request.go:632] Waited for 195.307104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741746   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741753   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.741839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.741853   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.745711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.746422   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.746442   34720 pod_ready.go:82] duration metric: took 399.762428ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.746454   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.941491   34720 request.go:632] Waited for 194.974915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941575   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.941582   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.941585   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.945250   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.142081   34720 request.go:632] Waited for 196.05781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142187   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142199   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.142207   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.142211   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.146079   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.146737   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.146756   34720 pod_ready.go:82] duration metric: took 400.295141ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.146770   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.342040   34720 request.go:632] Waited for 195.196365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342146   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342159   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.342171   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.342181   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.345711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.541794   34720 request.go:632] Waited for 195.310617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541870   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541876   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.541884   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.541889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.545585   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.546141   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.546158   34720 pod_ready.go:82] duration metric: took 399.379827ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.546174   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.742192   34720 request.go:632] Waited for 195.896441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742266   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742272   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.742279   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.742283   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.745382   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.941671   34720 request.go:632] Waited for 195.443927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941750   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941755   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.941763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.941767   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.945425   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.946182   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.946207   34720 pod_ready.go:82] duration metric: took 400.022007ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.946220   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.142264   34720 request.go:632] Waited for 195.977294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142349   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142355   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.142363   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.142372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.146093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.342119   34720 request.go:632] Waited for 195.354718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342174   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342179   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.342185   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.342189   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.345678   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.346226   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.346244   34720 pod_ready.go:82] duration metric: took 400.013115ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.346253   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.541907   34720 request.go:632] Waited for 195.545182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541986   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541995   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.542006   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.542018   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.545604   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.741571   34720 request.go:632] Waited for 195.370489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741659   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741667   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.741678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.741690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.745574   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.746159   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.746179   34720 pod_ready.go:82] duration metric: took 399.919057ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.746193   34720 pod_ready.go:39] duration metric: took 31.793515417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:35.746211   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:32:35.746295   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:32:35.770439   34720 api_server.go:72] duration metric: took 32.015036347s to wait for apiserver process to appear ...
	I0930 11:32:35.770467   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:32:35.770491   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:32:35.775724   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:32:35.775811   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:32:35.775820   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.775829   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.775838   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.776730   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:32:35.776791   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:32:35.776806   34720 api_server.go:131] duration metric: took 6.332786ms to wait for apiserver health ...
	I0930 11:32:35.776814   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:32:35.942219   34720 request.go:632] Waited for 165.338166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942284   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942290   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.942302   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.942308   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.948613   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:35.956880   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:32:35.956918   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:35.956927   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:35.956932   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:35.956938   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:35.956942   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:35.956947   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:35.956951   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:35.956956   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:35.956960   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:35.956965   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:35.956971   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:35.956977   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:35.956988   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:35.956996   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:35.957001   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:35.957009   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:35.957014   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:35.957019   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:35.957027   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:35.957033   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:35.957041   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:35.957046   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:35.957053   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:35.957058   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:35.957066   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:35.957070   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:35.957081   34720 system_pods.go:74] duration metric: took 180.260558ms to wait for pod list to return data ...
	I0930 11:32:35.957093   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:32:36.141557   34720 request.go:632] Waited for 184.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141646   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141655   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.141664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.141669   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.146009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.146146   34720 default_sa.go:45] found service account: "default"
	I0930 11:32:36.146163   34720 default_sa.go:55] duration metric: took 189.061389ms for default service account to be created ...
	I0930 11:32:36.146176   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:32:36.341683   34720 request.go:632] Waited for 195.43917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341772   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.341789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.341795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.348026   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:36.355936   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:32:36.355974   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:36.355980   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:36.355985   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:36.355989   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:36.355993   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:36.355997   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:36.356000   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:36.356003   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:36.356007   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:36.356011   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:36.356015   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:36.356019   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:36.356022   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:36.356025   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:36.356028   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:36.356031   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:36.356034   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:36.356037   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:36.356041   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:36.356044   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:36.356050   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:36.356053   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:36.356059   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:36.356062   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:36.356065   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:36.356068   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:36.356075   34720 system_pods.go:126] duration metric: took 209.893533ms to wait for k8s-apps to be running ...
	I0930 11:32:36.356084   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:32:36.356128   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:32:36.376905   34720 system_svc.go:56] duration metric: took 20.807413ms WaitForService to wait for kubelet
	I0930 11:32:36.376934   34720 kubeadm.go:582] duration metric: took 32.621540674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:32:36.376952   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:32:36.541278   34720 request.go:632] Waited for 164.265532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541328   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541345   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.541372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.541378   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.545532   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.546930   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546950   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546960   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546964   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546970   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546975   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546980   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546984   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546989   34720 node_conditions.go:105] duration metric: took 170.032136ms to run NodePressure ...
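The log above confirms per-node capacity by calling GET /api/v1/nodes directly. A minimal sketch of reproducing the same check from a workstation, assuming kubectl is pointed at the ha-033260 cluster (not part of the test run itself):

  # list nodes and print the capacity fields the NodePressure check reads
  kubectl get nodes
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'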
	I0930 11:32:36.547003   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:32:36.547027   34720 start.go:255] writing updated cluster config ...
	I0930 11:32:36.548771   34720 out.go:201] 
	I0930 11:32:36.549990   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:36.550071   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.551533   34720 out.go:177] * Starting "ha-033260-m04" worker node in "ha-033260" cluster
	I0930 11:32:36.552654   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:32:36.552671   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:32:36.552768   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:32:36.552782   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:32:36.552887   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.553084   34720 start.go:360] acquireMachinesLock for ha-033260-m04: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:32:36.553130   34720 start.go:364] duration metric: took 26.329µs to acquireMachinesLock for "ha-033260-m04"
	I0930 11:32:36.553148   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:32:36.553160   34720 fix.go:54] fixHost starting: m04
	I0930 11:32:36.553451   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:36.553481   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:36.569922   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0930 11:32:36.570471   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:36.571044   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:36.571066   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:36.571377   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:36.571578   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:36.571759   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:32:36.573541   34720 fix.go:112] recreateIfNeeded on ha-033260-m04: state=Stopped err=<nil>
	I0930 11:32:36.573570   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	W0930 11:32:36.573771   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:32:36.575555   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m04" ...
	I0930 11:32:36.576772   34720 main.go:141] libmachine: (ha-033260-m04) Calling .Start
	I0930 11:32:36.576973   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring networks are active...
	I0930 11:32:36.577708   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network default is active
	I0930 11:32:36.578046   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network mk-ha-033260 is active
	I0930 11:32:36.578396   34720 main.go:141] libmachine: (ha-033260-m04) Getting domain xml...
	I0930 11:32:36.579052   34720 main.go:141] libmachine: (ha-033260-m04) Creating domain...
	I0930 11:32:37.876264   34720 main.go:141] libmachine: (ha-033260-m04) Waiting to get IP...
	I0930 11:32:37.877213   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:37.877645   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:37.877707   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:37.877598   36596 retry.go:31] will retry after 232.490022ms: waiting for machine to come up
	I0930 11:32:38.112070   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.112572   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.112594   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.112550   36596 retry.go:31] will retry after 256.882229ms: waiting for machine to come up
	I0930 11:32:38.371192   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.371815   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.371840   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.371754   36596 retry.go:31] will retry after 461.059855ms: waiting for machine to come up
	I0930 11:32:38.834060   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.834574   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.834602   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.834535   36596 retry.go:31] will retry after 561.972608ms: waiting for machine to come up
	I0930 11:32:39.398393   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:39.398837   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:39.398861   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:39.398804   36596 retry.go:31] will retry after 603.760478ms: waiting for machine to come up
	I0930 11:32:40.004623   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.004981   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.005003   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.004944   36596 retry.go:31] will retry after 795.659949ms: waiting for machine to come up
	I0930 11:32:40.802044   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.802482   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.802507   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.802432   36596 retry.go:31] will retry after 876.600506ms: waiting for machine to come up
	I0930 11:32:41.680956   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:41.681439   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:41.681475   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:41.681410   36596 retry.go:31] will retry after 1.356578507s: waiting for machine to come up
	I0930 11:32:43.039790   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:43.040245   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:43.040273   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:43.040181   36596 retry.go:31] will retry after 1.138308059s: waiting for machine to come up
	I0930 11:32:44.180454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:44.180880   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:44.180912   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:44.180838   36596 retry.go:31] will retry after 1.724095206s: waiting for machine to come up
	I0930 11:32:45.906969   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:45.907551   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:45.907580   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:45.907505   36596 retry.go:31] will retry after 2.79096153s: waiting for machine to come up
	I0930 11:32:48.699904   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:48.700403   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:48.700433   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:48.700358   36596 retry.go:31] will retry after 2.880773223s: waiting for machine to come up
	I0930 11:32:51.582182   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:51.582528   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:51.582553   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:51.582515   36596 retry.go:31] will retry after 3.567167233s: waiting for machine to come up
	I0930 11:32:55.151238   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.151679   34720 main.go:141] libmachine: (ha-033260-m04) Found IP for machine: 192.168.39.104
	I0930 11:32:55.151704   34720 main.go:141] libmachine: (ha-033260-m04) Reserving static IP address...
	I0930 11:32:55.151717   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has current primary IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.152141   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.152161   34720 main.go:141] libmachine: (ha-033260-m04) Reserved static IP address: 192.168.39.104
	I0930 11:32:55.152180   34720 main.go:141] libmachine: (ha-033260-m04) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"}
	I0930 11:32:55.152198   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Getting to WaitForSSH function...
	I0930 11:32:55.152212   34720 main.go:141] libmachine: (ha-033260-m04) Waiting for SSH to be available...
	I0930 11:32:55.154601   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.154955   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.154984   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.155062   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH client type: external
	I0930 11:32:55.155094   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa (-rw-------)
	I0930 11:32:55.155127   34720 main.go:141] libmachine: (ha-033260-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:32:55.155140   34720 main.go:141] libmachine: (ha-033260-m04) DBG | About to run SSH command:
	I0930 11:32:55.155169   34720 main.go:141] libmachine: (ha-033260-m04) DBG | exit 0
	I0930 11:32:55.282203   34720 main.go:141] libmachine: (ha-033260-m04) DBG | SSH cmd err, output: <nil>: 
	I0930 11:32:55.282534   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetConfigRaw
	I0930 11:32:55.283161   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.286073   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286485   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.286510   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286784   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:55.287029   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:32:55.287049   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:55.287272   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.289455   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.289920   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.289948   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.290156   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.290326   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290453   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290576   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.290707   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.290900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.290913   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:32:55.398165   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:32:55.398197   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398448   34720 buildroot.go:166] provisioning hostname "ha-033260-m04"
	I0930 11:32:55.398492   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398697   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.401792   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.402275   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402458   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.402629   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402793   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402918   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.403113   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.403282   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.403294   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m04 && echo "ha-033260-m04" | sudo tee /etc/hostname
	I0930 11:32:55.531966   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m04
	
	I0930 11:32:55.531997   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.535254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535632   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.535675   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535815   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.536008   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536169   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536305   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.536447   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.536613   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.536629   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:55.658892   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:55.658919   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:55.658936   34720 buildroot.go:174] setting up certificates
	I0930 11:32:55.658945   34720 provision.go:84] configureAuth start
	I0930 11:32:55.658953   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.659243   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.662312   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662773   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.662798   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662957   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.665302   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665663   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.665690   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665764   34720 provision.go:143] copyHostCerts
	I0930 11:32:55.665796   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665833   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:55.665842   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665927   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:55.666021   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666039   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:55.666047   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666074   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:55.666119   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666136   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:55.666142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:55.666213   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m04 san=[127.0.0.1 192.168.39.104 ha-033260-m04 localhost minikube]
	I0930 11:32:55.889392   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:55.889469   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:55.889499   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.892080   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892386   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.892413   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892551   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.892776   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.892978   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.893178   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:55.976164   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:55.976265   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:56.003465   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:56.003537   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:32:56.030648   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:56.030726   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
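The server certificate generated above carries SANs [127.0.0.1 192.168.39.104 ha-033260-m04 localhost minikube] and is copied to /etc/docker/server.pem on the node. A hedged way to double-check those SANs after provisioning, assuming openssl is available inside the guest (it is not invoked by the test itself):

  # inspect the copied server certificate's SAN list
  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'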
	I0930 11:32:56.059845   34720 provision.go:87] duration metric: took 400.888299ms to configureAuth
	I0930 11:32:56.059878   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:56.060173   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:56.060271   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.063160   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063613   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.063639   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063847   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.064052   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064240   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064367   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.064511   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.064690   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.064709   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:56.291657   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:56.291682   34720 machine.go:96] duration metric: took 1.004640971s to provisionDockerMachine
	I0930 11:32:56.291696   34720 start.go:293] postStartSetup for "ha-033260-m04" (driver="kvm2")
	I0930 11:32:56.291709   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:56.291730   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.292023   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:56.292057   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.294563   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.294915   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.294940   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.295103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.295280   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.295424   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.295532   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.385215   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:56.389877   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:56.389903   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:56.389972   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:56.390073   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:56.390086   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:56.390178   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:56.400442   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:56.429361   34720 start.go:296] duration metric: took 137.644684ms for postStartSetup
	I0930 11:32:56.429427   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.429716   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:56.429741   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.432628   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433129   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.433159   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433319   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.433495   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.433694   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.433867   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.520351   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:56.520411   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:56.579433   34720 fix.go:56] duration metric: took 20.026269147s for fixHost
	I0930 11:32:56.579489   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.582670   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583091   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.583121   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583274   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.583494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583682   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583865   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.584063   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.584279   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.584294   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:56.698854   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695976.655532462
	
	I0930 11:32:56.698887   34720 fix.go:216] guest clock: 1727695976.655532462
	I0930 11:32:56.698900   34720 fix.go:229] Guest: 2024-09-30 11:32:56.655532462 +0000 UTC Remote: 2024-09-30 11:32:56.579461897 +0000 UTC m=+453.306592605 (delta=76.070565ms)
	I0930 11:32:56.698920   34720 fix.go:200] guest clock delta is within tolerance: 76.070565ms
	I0930 11:32:56.698927   34720 start.go:83] releasing machines lock for "ha-033260-m04", held for 20.145784895s
	I0930 11:32:56.698949   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.699224   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:56.702454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.702852   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.702883   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.705376   34720 out.go:177] * Found network options:
	I0930 11:32:56.706947   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	W0930 11:32:56.708247   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708274   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708287   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.708308   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.708969   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709162   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709279   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:56.709323   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	W0930 11:32:56.709360   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709386   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709401   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.709475   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:56.709494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.712173   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712335   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712568   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712592   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712731   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.712845   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712870   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712874   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.712987   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.713033   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.713168   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.713207   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713330   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.934813   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:56.941348   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:56.941419   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:56.960961   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:56.960992   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:56.961081   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:56.980594   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:56.996216   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:56.996273   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:57.013214   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:57.028755   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:57.149354   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:57.318133   34720 docker.go:233] disabling docker service ...
	I0930 11:32:57.318197   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:57.334364   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:57.349711   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:57.496565   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:57.627318   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:57.643513   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:57.667655   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:57.667720   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.680838   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:57.680907   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.693421   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.705291   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.717748   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:57.730805   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.742351   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.761934   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
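The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they set the pause image, switch the cgroup manager to cgroupfs, pin conmon to the "pod" cgroup, and add an unprivileged-port sysctl. A minimal sketch for confirming the resulting values on the node (expected lines inferred from the commands above; the rest of the file is not shown in this log):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",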
	I0930 11:32:57.773112   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:57.783201   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:57.783257   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:57.797812   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
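The sysctl probe above exits with status 255 because br_netfilter is not loaded yet, so minikube loads the module and enables IPv4 forwarding before restarting CRI-O. A short sketch for verifying both settings on the node afterwards (illustrative only, assuming a shell on ha-033260-m04):

  lsmod | grep br_netfilter                    # module should appear after the modprobe above
  sysctl net.bridge.bridge-nf-call-iptables    # should now resolve instead of 'cannot stat'
  cat /proc/sys/net/ipv4/ip_forward            # 1, per the echo above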
	I0930 11:32:57.813538   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:57.938077   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:58.044521   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:58.044587   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:58.049533   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:58.049596   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:58.053988   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:58.101662   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:58.101732   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.132323   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.163597   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:58.164981   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:58.166271   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:58.167862   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	I0930 11:32:58.169165   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:58.172162   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172529   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:58.172550   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172762   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:58.178993   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.194096   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:58.194385   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.194741   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.194790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.210665   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0930 11:32:58.211101   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.211610   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.211628   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.211954   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.212130   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:58.213485   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:58.213820   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.213854   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.228447   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0930 11:32:58.228877   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.229355   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.229373   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.229837   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.230027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:58.230180   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.104
	I0930 11:32:58.230191   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:58.230204   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:58.230340   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:58.230387   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:58.230397   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:58.230409   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:58.230422   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:58.230434   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:58.230491   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:58.230521   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:58.230531   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:58.230554   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:58.230577   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:58.230597   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:58.230650   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:58.230688   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.230705   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.230732   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.230759   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:58.258115   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:58.284212   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:58.311332   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:58.336428   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:58.362719   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:58.389689   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:58.416593   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:58.423417   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:58.435935   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442361   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442428   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.448829   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:58.461056   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:58.473436   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478046   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478120   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.484917   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:58.497497   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:58.509506   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514695   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514766   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.521000   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:58.533195   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:58.538066   34720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:32:58.538108   34720 kubeadm.go:934] updating node {m04 192.168.39.104 0 v1.31.1 crio false true} ...
	I0930 11:32:58.538196   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:58.538246   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:58.549564   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:58.549678   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0930 11:32:58.561086   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:58.581046   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:58.599680   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:58.603972   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.618040   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.758745   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.778316   34720 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0930 11:32:58.778666   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.780417   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:58.781848   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.954652   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.980788   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:58.981140   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:58.981229   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:58.981531   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:58.981654   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:58.981668   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:58.981678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:58.981682   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:58.985441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.482501   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:59.482522   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.482530   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.482534   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.485809   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.486316   34720 node_ready.go:49] node "ha-033260-m04" has status "Ready":"True"
	I0930 11:32:59.486339   34720 node_ready.go:38] duration metric: took 504.792648ms for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:59.486347   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:59.486421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:59.486437   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.486444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.486448   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.491643   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:59.500880   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.501000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:59.501020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.501033   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.501040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.504511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.505105   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.505120   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.505126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.505130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.508330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.508816   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.508834   34720 pod_ready.go:82] duration metric: took 7.916953ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508846   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508911   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:59.508921   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.508931   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.508940   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.512254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.513133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.513147   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.513157   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.513162   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.516730   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.517273   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.517290   34720 pod_ready.go:82] duration metric: took 8.437165ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517301   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517361   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:59.517370   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.517380   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.517387   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521073   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.521748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.521764   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.521772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.524702   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.525300   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.525316   34720 pod_ready.go:82] duration metric: took 8.008761ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525325   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:59.525383   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.525390   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.525393   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.528314   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.528898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:59.528914   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.528924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.528930   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.531717   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.532229   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.532246   34720 pod_ready.go:82] duration metric: took 6.914296ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.532257   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.682582   34720 request.go:632] Waited for 150.25854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682645   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.682658   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.682662   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.689539   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:59.883130   34720 request.go:632] Waited for 192.41473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.883210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.883232   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.887618   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:59.888108   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.888129   34720 pod_ready.go:82] duration metric: took 355.865471ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.888150   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.083448   34720 request.go:632] Waited for 195.22183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083541   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083549   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.083560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.083571   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.087440   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.283491   34720 request.go:632] Waited for 195.322885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.283590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.283596   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.287218   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.287959   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.287982   34720 pod_ready.go:82] duration metric: took 399.823014ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.287995   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.483353   34720 request.go:632] Waited for 195.279455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483436   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483446   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.483457   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.483468   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.487640   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:00.682537   34720 request.go:632] Waited for 194.177349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682623   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.682632   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.682641   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.686128   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.686721   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.686744   34720 pod_ready.go:82] duration metric: took 398.740461ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.686757   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.882895   34720 request.go:632] Waited for 196.06624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882956   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.882963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.882967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.887704   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.082816   34720 request.go:632] Waited for 194.378573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082908   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.082920   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.082928   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.086938   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.088023   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.088045   34720 pod_ready.go:82] duration metric: took 401.279304ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.088058   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.283083   34720 request.go:632] Waited for 194.957282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283183   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283198   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.283211   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.283221   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.288754   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:33:01.482812   34720 request.go:632] Waited for 193.21938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482876   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482883   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.482895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.482906   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.487184   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.488013   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.488035   34720 pod_ready.go:82] duration metric: took 399.968755ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.488047   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.682796   34720 request.go:632] Waited for 194.675415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682878   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682885   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.682895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.682903   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.687354   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.883473   34720 request.go:632] Waited for 195.37133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883544   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883551   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.883560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.883565   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.887254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.887998   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.888020   34720 pod_ready.go:82] duration metric: took 399.964872ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.888033   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.082969   34720 request.go:632] Waited for 194.870325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083045   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083051   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.083059   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.083071   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.087791   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.283169   34720 request.go:632] Waited for 194.361368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283289   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283304   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.283331   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.283350   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.289541   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:02.290706   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:02.290729   34720 pod_ready.go:82] duration metric: took 402.687198ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.290741   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.483158   34720 request.go:632] Waited for 192.351675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483216   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483222   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.483229   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.483233   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.487135   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:02.683325   34720 request.go:632] Waited for 195.063306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683451   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683485   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.683516   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.683525   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.687678   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.883237   34720 request.go:632] Waited for 92.265907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883323   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883335   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.883343   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.883351   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.887580   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.082785   34720 request.go:632] Waited for 194.294379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082857   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082862   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.082872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.082876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.086700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.291740   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:03.291767   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.291777   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.291783   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.295392   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.483576   34720 request.go:632] Waited for 187.437599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483647   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483655   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.483667   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.483677   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.487588   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.488048   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.488067   34720 pod_ready.go:82] duration metric: took 1.197317957s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.488076   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.683488   34720 request.go:632] Waited for 195.341906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.683590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.683597   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.687625   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.882797   34720 request.go:632] Waited for 194.279012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882884   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882896   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.882906   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.882924   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.886967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.887827   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.887857   34720 pod_ready.go:82] duration metric: took 399.773896ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.887870   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.082926   34720 request.go:632] Waited for 194.972094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083025   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.083037   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.083041   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.087402   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.283534   34720 request.go:632] Waited for 194.922082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283619   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.283626   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.283630   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.287420   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:04.288067   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.288124   34720 pod_ready.go:82] duration metric: took 400.245815ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.288141   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.483212   34720 request.go:632] Waited for 194.995215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483277   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483290   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.483319   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.483325   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.487831   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.682773   34720 request.go:632] Waited for 194.183233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682843   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.682854   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.682858   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.686967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.687793   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.687819   34720 pod_ready.go:82] duration metric: took 399.669055ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.687836   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.882848   34720 request.go:632] Waited for 194.931159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882922   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882930   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.882942   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.882951   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.886911   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.083280   34720 request.go:632] Waited for 195.375329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083376   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083387   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.083398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.083407   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.086880   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.087419   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.087441   34720 pod_ready.go:82] duration metric: took 399.596031ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.087453   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.282500   34720 request.go:632] Waited for 194.956546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282556   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282561   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.282568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.282582   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.285978   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.482968   34720 request.go:632] Waited for 196.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483125   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483139   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.483149   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.483155   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.489591   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:05.490240   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.490263   34720 pod_ready.go:82] duration metric: took 402.801252ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.490276   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.683160   34720 request.go:632] Waited for 192.80812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683317   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683345   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.683360   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.683366   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.687330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.883447   34720 request.go:632] Waited for 195.335552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883530   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.883545   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.883553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.887272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.888002   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.888020   34720 pod_ready.go:82] duration metric: took 397.737135ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.888031   34720 pod_ready.go:39] duration metric: took 6.401673703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:33:05.888048   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:33:05.888099   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:33:05.905331   34720 system_svc.go:56] duration metric: took 17.278667ms WaitForService to wait for kubelet
	I0930 11:33:05.905363   34720 kubeadm.go:582] duration metric: took 7.126999309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:33:05.905382   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:33:06.082680   34720 request.go:632] Waited for 177.227376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082733   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082739   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:06.082746   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:06.082751   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:06.087224   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:06.088896   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088918   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088929   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088932   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088935   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088939   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088942   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088945   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088948   34720 node_conditions.go:105] duration metric: took 183.562454ms to run NodePressure ...
	I0930 11:33:06.088959   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:33:06.088977   34720 start.go:255] writing updated cluster config ...
	I0930 11:33:06.089268   34720 ssh_runner.go:195] Run: rm -f paused
	I0930 11:33:06.143377   34720 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:33:06.145486   34720 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.726781965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696074726717243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f1a51b5-f23f-4f1b-870c-8cd0b50dfa09 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.727417655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02044865-ddad-443e-b605-d45e74e6360d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.727489775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02044865-ddad-443e-b605-d45e74e6360d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.727855905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02044865-ddad-443e-b605-d45e74e6360d name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.778687764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f40068c3-9a1d-428b-b291-7d35591faa42 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.778815098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f40068c3-9a1d-428b-b291-7d35591faa42 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.780462248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2f3917a-cf08-4311-ae97-657381604a19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.781069843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696074781036831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2f3917a-cf08-4311-ae97-657381604a19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.781981904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8485e1d-182d-4230-92b6-7be08eddb33b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.782080330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8485e1d-182d-4230-92b6-7be08eddb33b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.782608334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8485e1d-182d-4230-92b6-7be08eddb33b name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.828028761Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d4564eb-8cfd-48d9-a6dc-548670f26af1 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.828128685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d4564eb-8cfd-48d9-a6dc-548670f26af1 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.829771681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b95b2aca-74ba-4465-86d7-a059411e6bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.830232204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696074830193456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b95b2aca-74ba-4465-86d7-a059411e6bd4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.831016361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0dbb462-8ce8-4438-83f9-d81ca96755d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.831089592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0dbb462-8ce8-4438-83f9-d81ca96755d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.831470389Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0dbb462-8ce8-4438-83f9-d81ca96755d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.881882493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=acd2c7b3-b18a-44df-a24a-d869591f6ea1 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.881983719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=acd2c7b3-b18a-44df-a24a-d869591f6ea1 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.883934656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffbcba6e-ec85-4e1e-abce-541848127e30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.885564722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696074885535544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffbcba6e-ec85-4e1e-abce-541848127e30 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.886268754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0f042a5-587a-41e4-983f-94b427ce881f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.886413242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0f042a5-587a-41e4-983f-94b427ce881f name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:34 ha-033260 crio[1037]: time="2024-09-30 11:34:34.886771768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0f042a5-587a-41e4-983f-94b427ce881f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88e9d994261ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       5                   d40067a91d083       storage-provisioner
	df3f12d455b8e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   2                   80de34a6f14ca       busybox-7dff88458-nbhwc
	1937cce4ac070       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               2                   40863d7ac6437       kindnet-g94k6
	447147b39349f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago       Running             kube-proxy                2                   96e86b12ad9b7       kube-proxy-mxvxr
	d33c75c18e088       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   2                   74bab7f17b06b       coredns-7c65d6cfc9-kt87v
	88e2f3c9b905b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   2                   f6863e18fb197       coredns-7c65d6cfc9-5frmm
	f4c792280b15b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       4                   d40067a91d083       storage-provisioner
	487866f095e01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago       Running             kube-controller-manager   4                   1eee82fccc84c       kube-controller-manager-ha-033260
	6ea8bba210502       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Running             kube-apiserver            4                   498808de72075       kube-apiserver-ha-033260
	bf743c3bfec10       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     3 minutes ago       Running             kube-vip                  1                   bfb2a9b6e2e5a       kube-vip-ha-033260
	91514ddf1467c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Exited              kube-apiserver            3                   498808de72075       kube-apiserver-ha-033260
	b2e1a261e4464       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago       Running             etcd                      2                   5d3f45272bb02       etcd-ha-033260
	fd2ffaa7ff33d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago       Running             kube-scheduler            2                   aeafc6ee55a4d       kube-scheduler-ha-033260
	9f9c8e0b4eb8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago       Exited              kube-controller-manager   3                   1eee82fccc84c       kube-controller-manager-ha-033260
	
	
	==> coredns [88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60977 - 56023 "HINFO IN 6022066924044087929.8494370084378227503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030589997s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1363673838]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.175) (total time: 30002ms):
	Trace[1363673838]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:31:59.176)
	Trace[1363673838]: [30.00230997s] [30.00230997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1452341617]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30003ms):
	Trace[1452341617]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1452341617]: [30.0032564s] [30.0032564s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1546520065]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1546520065]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1546520065]: [30.002775951s] [30.002775951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44743 - 60294 "HINFO IN 2203689339262482561.411210931008286347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030703121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469308931]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[469308931]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.176)
	Trace[469308931]: [30.002568999s] [30.002568999s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1100740362]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1100740362]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1100740362]: [30.002476509s] [30.002476509s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1653957079]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.176) (total time: 30002ms):
	Trace[1653957079]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.178)
	Trace[1653957079]: [30.002259084s] [30.002259084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:31:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    819b9c53-0125-4e30-b11d-f0c734cdb490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  Starting                 22m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                    kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                    kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                    kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                22m                    kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  Starting                 3m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s (x8 over 3m57s)  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x8 over 3m57s)  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x7 over 3m57s)  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           22s                    node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    c982302c-6e81-49de-9ba4-9fad6b0527be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 21m                    kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             18m                    node-controller  Node ha-033260-m02 status is now: NodeNotReady
	  Normal  Starting                 3m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m33s (x8 over 3m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m33s (x8 over 3m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m33s (x7 over 3m34s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           22s                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m15s              kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           3m11s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           3m10s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   Starting                 2m32s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m31s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m31s              kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s              kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s              kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m31s              kubelet          Node ha-033260-m03 has been rebooted, boot id: 0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Normal   RegisteredNode           2m11s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           22s                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    5c8fe13a-3363-443e-bb87-2dda804740af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 92s                kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           19m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           19m                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeReady                19m                kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m11s              node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           3m10s              node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeNotReady             2m31s              node-controller  Node ha-033260-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m11s              node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   Starting                 96s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s (x2 over 96s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s (x2 over 96s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s (x2 over 96s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 96s                kubelet          Node ha-033260-m04 has been rebooted, boot id: 5c8fe13a-3363-443e-bb87-2dda804740af
	  Normal   NodeReady                96s                kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           22s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	
	
	Name:               ha-033260-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_34_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:34:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-033260-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b0444cb38648a79a2155d7dbdd1774
	  System UUID:                82b0444c-b386-48a7-9a21-55d7dbdd1774
	  Boot ID:                    26c588a2-1adf-44af-9d60-2a708fb03f44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-033260-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kindnet-9bn6h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      32s
	  kube-system                 kube-apiserver-ha-033260-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-033260-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-6ddjb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-ha-033260-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-033260-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  33s (x8 over 33s)  kubelet          Node ha-033260-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 33s)  kubelet          Node ha-033260-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 33s)  kubelet          Node ha-033260-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           31s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           30s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           22s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	
	
	==> dmesg <==
	[Sep30 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051485] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.894871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.799819] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.926902] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.063947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060890] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.189706] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.143881] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.315063] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +4.231701] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.066662] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.898522] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.432816] kauditd_printk_skb: 40 callbacks suppressed
	[Sep30 11:31] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199] <==
	{"level":"info","ts":"2024-09-30T11:34:03.220627Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","added-peer-id":"182fb6b050f82820","added-peer-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-09-30T11:34:03.220703Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.220755Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.225150Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.225266Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820","remote-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-09-30T11:34:03.227877Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.228212Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.228481Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.229273Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:03.396992Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-30T11:34:04.392189Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:04.799407Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"182fb6b050f82820","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T11:34:04.799456Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.799510Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.809420Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"182fb6b050f82820","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T11:34:04.809478Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:04.883795Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:04.924487Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.926745Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:05.051491Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.146:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-30T11:34:05.070541Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.146:49184","server-name":"","error":"read tcp 192.168.39.249:2380->192.168.39.146:49184: read: connection reset by peer"}
	{"level":"warn","ts":"2024-09-30T11:34:05.880660Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:06.382681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 switched to configuration voters=(1742812449204611104 2179423914693294938 3571047793177318727 18390992626900585602)"}
	{"level":"info","ts":"2024-09-30T11:34:06.382967Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547"}
	{"level":"info","ts":"2024-09-30T11:34:06.383056Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"318ee90c3446d547","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"182fb6b050f82820"}
	
	
	==> kernel <==
	 11:34:35 up 4 min,  0 users,  load average: 0.16, 0.24, 0.11
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe] <==
	I0930 11:34:10.501805       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:10.501972       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:10.502027       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:10.502174       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:10.502241       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:20.503444       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:34:20.503553       1 main.go:299] handling current node
	I0930 11:34:20.503584       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:34:20.503602       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:20.503752       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:20.503806       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:20.503918       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:20.503956       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:20.504030       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0930 11:34:20.504050       1 main.go:322] Node ha-033260-m05 has CIDR [10.244.4.0/24] 
	I0930 11:34:30.500054       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:34:30.500578       1 main.go:299] handling current node
	I0930 11:34:30.500677       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:34:30.500727       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:30.501004       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:30.501041       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:30.501140       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:30.501166       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:30.501254       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0930 11:34:30.501285       1 main.go:322] Node ha-033260-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c] <==
	I0930 11:31:21.381575       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0930 11:31:21.538562       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:31:21.543182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:31:21.543721       1 policy_source.go:224] refreshing policies
	I0930 11:31:21.579575       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:31:21.579665       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:31:21.580585       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:31:21.581145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:31:21.581189       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:31:21.579601       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 11:31:21.579657       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:31:21.581999       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 11:31:21.582037       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:31:21.582044       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:31:21.582048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:31:21.582053       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:31:21.586437       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0930 11:31:21.607643       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0930 11:31:21.609050       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:31:21.622457       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 11:31:21.631794       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 11:31:21.643397       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:31:22.390935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 11:31:22.949170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	W0930 11:31:42.954664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249 192.168.39.3]
	
	
	==> kube-apiserver [91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1] <==
	I0930 11:30:45.187556       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:30:45.195121       1 server.go:142] Version: v1.31.1
	I0930 11:30:45.195252       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.676469       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:30:46.702385       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:30:46.710100       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:30:46.716179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:30:46.716589       1 instance.go:232] Using reconciler: lease
	W0930 11:31:06.661936       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.662284       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.717971       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:31:06.718008       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a] <==
	I0930 11:32:59.248647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.651723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:33:29.535408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:34:03.020603       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-033260-m05\" does not exist"
	I0930 11:34:03.024566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:34:03.049032       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-033260-m05" podCIDRs=["10.244.4.0/24"]
	I0930 11:34:03.049178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.049244       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.070085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.116559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:04.752051       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-033260-m05"
	I0930 11:34:04.778067       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:05.552860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:05.654561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.053887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.684022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.832721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.147655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.258108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.396911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:23.927584       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:34:23.927600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:23.948773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:24.684227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:33.535177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	
	
	==> kube-controller-manager [9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438] <==
	I0930 11:30:45.993698       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:30:46.957209       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:30:46.957296       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.962662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:30:46.963278       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:30:46.963571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:30:46.963743       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:31:21.471526       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:31:29.611028       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:31:29.650081       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:31:29.650432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:31:29.730719       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:31:29.730781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:31:29.730811       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:31:29.734900       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:31:29.735864       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:31:29.735899       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:31:29.738688       1 config.go:199] "Starting service config controller"
	I0930 11:31:29.738986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:31:29.739407       1 config.go:328] "Starting node config controller"
	I0930 11:31:29.739433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:31:29.739913       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:31:29.743750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:31:29.743822       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:31:29.840409       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:31:29.840462       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40] <==
	W0930 11:31:21.480661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:31:21.480791       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 11:31:23.035263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:34:03.144301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gmtjt\": pod kube-proxy-gmtjt is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gmtjt" node="ha-033260-m05"
	E0930 11:34:03.147570       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 73f4f491-bf2e-4e17-a8b4-b0908b01186a(kube-system/kube-proxy-gmtjt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gmtjt"
	E0930 11:34:03.151193       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gmtjt\": pod kube-proxy-gmtjt is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-gmtjt"
	I0930 11:34:03.153223       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gmtjt" node="ha-033260-m05"
	E0930 11:34:03.153592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-knb8g\": pod kindnet-knb8g is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-knb8g" node="ha-033260-m05"
	E0930 11:34:03.157433       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a889bb1-8d4f-409c-956d-1dfc1466b1c4(kube-system/kindnet-knb8g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-knb8g"
	E0930 11:34:03.157542       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-knb8g\": pod kindnet-knb8g is already assigned to node \"ha-033260-m05\"" pod="kube-system/kindnet-knb8g"
	I0930 11:34:03.157610       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-knb8g" node="ha-033260-m05"
	E0930 11:34:03.147155       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z97cm\": pod kube-proxy-z97cm is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z97cm" node="ha-033260-m05"
	E0930 11:34:03.157746       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15112029-30e1-4a61-a241-dfbb2dab99e9(kube-system/kube-proxy-z97cm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z97cm"
	E0930 11:34:03.157754       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z97cm\": pod kube-proxy-z97cm is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-z97cm"
	I0930 11:34:03.157815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z97cm" node="ha-033260-m05"
	E0930 11:34:05.465774       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6ddjb\": pod kube-proxy-6ddjb is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6ddjb" node="ha-033260-m05"
	E0930 11:34:05.465973       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6ddjb\": pod kube-proxy-6ddjb is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-6ddjb"
	E0930 11:34:05.467782       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kp6dd\": pod kube-proxy-kp6dd is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kp6dd" node="ha-033260-m05"
	E0930 11:34:05.467827       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1ab922ab-22f6-4421-be70-d9d33fb156f7(kube-system/kube-proxy-kp6dd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kp6dd"
	E0930 11:34:05.467846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kp6dd\": pod kube-proxy-kp6dd is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-kp6dd"
	I0930 11:34:05.467865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kp6dd" node="ha-033260-m05"
	E0930 11:34:05.468295       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lnh54\": pod kube-proxy-lnh54 is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lnh54" node="ha-033260-m05"
	E0930 11:34:05.468372       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0db280d3-c9b7-4101-a094-2d3ab3b46285(kube-system/kube-proxy-lnh54) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lnh54"
	E0930 11:34:05.468394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lnh54\": pod kube-proxy-lnh54 is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-lnh54"
	I0930 11:34:05.468417       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lnh54" node="ha-033260-m05"
	
	
	==> kubelet <==
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071041    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:32:58 ha-033260 kubelet[1140]: E0930 11:32:58.071099    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695978069834586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077457    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:08 ha-033260 kubelet[1140]: E0930 11:33:08.077518    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695988076772880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:18 ha-033260 kubelet[1140]: E0930 11:33:18.078987    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695998078667223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:18 ha-033260 kubelet[1140]: E0930 11:33:18.079433    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727695998078667223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:28 ha-033260 kubelet[1140]: E0930 11:33:28.081041    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696008080579211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:28 ha-033260 kubelet[1140]: E0930 11:33:28.081579    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696008080579211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.056742    1140 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:33:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.083473    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696018083201135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.083500    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696018083201135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:48 ha-033260 kubelet[1140]: E0930 11:33:48.086188    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696028084875013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:48 ha-033260 kubelet[1140]: E0930 11:33:48.086221    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696028084875013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:58 ha-033260 kubelet[1140]: E0930 11:33:58.088121    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696038087738311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:58 ha-033260 kubelet[1140]: E0930 11:33:58.088151    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696038087738311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:08 ha-033260 kubelet[1140]: E0930 11:34:08.092772    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696048092260165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:08 ha-033260 kubelet[1140]: E0930 11:34:08.092834    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696048092260165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:18 ha-033260 kubelet[1140]: E0930 11:34:18.095537    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696058094945164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:18 ha-033260 kubelet[1140]: E0930 11:34:18.095643    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696058094945164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:28 ha-033260 kubelet[1140]: E0930 11:34:28.099890    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696068097917010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:28 ha-033260 kubelet[1140]: E0930 11:34:28.100222    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696068097917010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (83.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.109179427s)
ha_test.go:304: expected profile "ha-033260" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-033260\",\"Status\":\"OKHAppy\",\"Config\":{\"Name\":\"ha-033260\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPor
t\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-033260\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"KubernetesVersion\":
\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.104\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.39.146\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\
"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount
\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-033260" in json of 'profile list' to have "HAppy" status but have "OKHAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-033260\",\"Status\":\"OKHAppy\",\"Config\":{\"Name\":\"ha-033260\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-033260\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.249\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.238\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.104\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.39.146\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-
dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":946080000000
00000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-033260 -n ha-033260
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-033260 logs -n 25: (1.774808845s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m04 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp testdata/cp-test.txt                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt                       |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260 sudo cat                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260.txt                                 |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m02 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | ha-033260-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-033260 ssh -n ha-033260-m03 sudo cat                                          | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC | 30 Sep 24 11:15 UTC |
	|         | /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-033260 node stop m02 -v=7                                                     | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-033260 node start m02 -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260 -v=7                                                           | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-033260 -v=7                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true -v=7                                                    | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-033260                                                                | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	| node    | ha-033260 node delete m03 -v=7                                                   | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-033260 stop -v=7                                                              | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-033260 --wait=true                                                         | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:25 UTC | 30 Sep 24 11:33 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-033260                                                                 | ha-033260 | jenkins | v1.34.0 | 30 Sep 24 11:33 UTC | 30 Sep 24 11:34 UTC |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:25:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:25:23.307171   34720 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:25:23.307438   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307448   34720 out.go:358] Setting ErrFile to fd 2...
	I0930 11:25:23.307454   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:25:23.307638   34720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:25:23.308189   34720 out.go:352] Setting JSON to false
	I0930 11:25:23.309088   34720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4070,"bootTime":1727691453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:25:23.309188   34720 start.go:139] virtualization: kvm guest
	I0930 11:25:23.312163   34720 out.go:177] * [ha-033260] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:25:23.313387   34720 notify.go:220] Checking for updates...
	I0930 11:25:23.313393   34720 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:25:23.314778   34720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:25:23.316338   34720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:25:23.317962   34720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:25:23.319385   34720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:25:23.320813   34720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:25:23.322948   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:25:23.323340   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.323412   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.338759   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41721
	I0930 11:25:23.339192   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.339786   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.339807   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.340136   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.340346   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.340572   34720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:25:23.340857   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.340891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.355777   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
	I0930 11:25:23.356254   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.356744   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.356763   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.357120   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.357292   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.393653   34720 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:25:23.394968   34720 start.go:297] selected driver: kvm2
	I0930 11:25:23.394986   34720 start.go:901] validating driver "kvm2" against &{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.395148   34720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:25:23.395486   34720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.395574   34720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:25:23.411100   34720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:25:23.411834   34720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:25:23.411865   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:25:23.411907   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:25:23.411964   34720 start.go:340] cluster config:
	{Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:25:23.412098   34720 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:25:23.413851   34720 out.go:177] * Starting "ha-033260" primary control-plane node in "ha-033260" cluster
	I0930 11:25:23.415381   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:25:23.415422   34720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:25:23.415429   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:25:23.415534   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:25:23.415546   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:25:23.415667   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:25:23.415859   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:25:23.415901   34720 start.go:364] duration metric: took 23.767µs to acquireMachinesLock for "ha-033260"
	I0930 11:25:23.415913   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:25:23.415920   34720 fix.go:54] fixHost starting: 
	I0930 11:25:23.416165   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:25:23.416196   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:25:23.430823   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42789
	I0930 11:25:23.431277   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:25:23.431704   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:25:23.431723   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:25:23.432018   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:25:23.432228   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.432375   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:25:23.433975   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Running err=<nil>
	W0930 11:25:23.434007   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:25:23.436150   34720 out.go:177] * Updating the running kvm2 "ha-033260" VM ...
	I0930 11:25:23.437473   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:25:23.437494   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:25:23.437753   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:25:23.440392   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.440831   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:11:31 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:25:23.440858   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:25:23.441041   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:25:23.441214   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441380   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:25:23.441502   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:25:23.441655   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:25:23.441833   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:25:23.441844   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:25:26.337999   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:29.409914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:35.489955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:38.561928   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:44.641887   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:47.713916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:53.793988   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:25:56.865946   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:10.017864   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:16.097850   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:19.169940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:25.249934   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:28.321888   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:34.401910   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:37.473948   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:43.553872   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:46.625911   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:52.705908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:26:55.777884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:01.857921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:04.929922   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:11.009956   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:14.081936   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:20.161884   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:23.233917   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:29.313903   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:32.385985   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:38.465815   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:41.537920   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:47.617898   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:50.689890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:56.769908   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:27:59.841901   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:05.921893   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:08.993941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:15.073913   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:18.145943   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:24.225916   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:27.297994   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:33.377803   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:36.449892   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:42.529904   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:45.601915   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:51.681921   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:28:54.753890   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:00.833932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:03.905924   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:09.985909   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:13.057955   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:19.137932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:22.209941   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:28.289972   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:31.361973   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:37.441940   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:40.513906   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:46.593938   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:49.665931   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:55.745914   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:29:58.817932   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:04.897939   34720 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.249:22: connect: no route to host
	I0930 11:30:07.900098   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:07.900146   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900476   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:07.900498   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:07.900690   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:07.902604   34720 machine.go:96] duration metric: took 4m44.465113929s to provisionDockerMachine
	I0930 11:30:07.902642   34720 fix.go:56] duration metric: took 4m44.486721557s for fixHost
	I0930 11:30:07.902649   34720 start.go:83] releasing machines lock for "ha-033260", held for 4m44.486740655s
	W0930 11:30:07.902664   34720 start.go:714] error starting host: provision: host is not running
	W0930 11:30:07.902739   34720 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0930 11:30:07.902751   34720 start.go:729] Will try again in 5 seconds ...
	I0930 11:30:12.906532   34720 start.go:360] acquireMachinesLock for ha-033260: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:12.906673   34720 start.go:364] duration metric: took 71.92µs to acquireMachinesLock for "ha-033260"
	I0930 11:30:12.906700   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:12.906710   34720 fix.go:54] fixHost starting: 
	I0930 11:30:12.906980   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:12.907012   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:12.922017   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0930 11:30:12.922407   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:12.922875   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:12.922898   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:12.923192   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:12.923373   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:12.923532   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:30:12.925123   34720 fix.go:112] recreateIfNeeded on ha-033260: state=Stopped err=<nil>
	I0930 11:30:12.925146   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	W0930 11:30:12.925301   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:12.927074   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260" ...
	I0930 11:30:12.928250   34720 main.go:141] libmachine: (ha-033260) Calling .Start
	I0930 11:30:12.928414   34720 main.go:141] libmachine: (ha-033260) Ensuring networks are active...
	I0930 11:30:12.929185   34720 main.go:141] libmachine: (ha-033260) Ensuring network default is active
	I0930 11:30:12.929536   34720 main.go:141] libmachine: (ha-033260) Ensuring network mk-ha-033260 is active
	I0930 11:30:12.929877   34720 main.go:141] libmachine: (ha-033260) Getting domain xml...
	I0930 11:30:12.930569   34720 main.go:141] libmachine: (ha-033260) Creating domain...
	I0930 11:30:14.153271   34720 main.go:141] libmachine: (ha-033260) Waiting to get IP...
	I0930 11:30:14.154287   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.154676   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.154756   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.154665   35728 retry.go:31] will retry after 246.651231ms: waiting for machine to come up
	I0930 11:30:14.403231   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.403674   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.403727   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.403659   35728 retry.go:31] will retry after 262.960523ms: waiting for machine to come up
	I0930 11:30:14.668247   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:14.668711   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:14.668739   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:14.668675   35728 retry.go:31] will retry after 381.564783ms: waiting for machine to come up
	I0930 11:30:15.052320   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.052821   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.052846   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.052760   35728 retry.go:31] will retry after 588.393032ms: waiting for machine to come up
	I0930 11:30:15.642361   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:15.642772   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:15.642801   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:15.642723   35728 retry.go:31] will retry after 588.302425ms: waiting for machine to come up
	I0930 11:30:16.232721   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:16.233152   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:16.233171   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:16.233111   35728 retry.go:31] will retry after 770.742378ms: waiting for machine to come up
	I0930 11:30:17.005248   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:17.005687   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:17.005718   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:17.005645   35728 retry.go:31] will retry after 1.118737809s: waiting for machine to come up
	I0930 11:30:18.126316   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:18.126728   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:18.126755   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:18.126678   35728 retry.go:31] will retry after 1.317343847s: waiting for machine to come up
	I0930 11:30:19.446227   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:19.446785   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:19.446810   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:19.446709   35728 retry.go:31] will retry after 1.309700527s: waiting for machine to come up
	I0930 11:30:20.758241   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:20.758680   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:20.758702   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:20.758651   35728 retry.go:31] will retry after 1.521862953s: waiting for machine to come up
	I0930 11:30:22.282731   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:22.283205   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:22.283242   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:22.283159   35728 retry.go:31] will retry after 2.906878377s: waiting for machine to come up
	I0930 11:30:25.192687   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:25.193133   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:25.193170   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:25.193111   35728 retry.go:31] will retry after 2.807596314s: waiting for machine to come up
	I0930 11:30:28.002489   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:28.002972   34720 main.go:141] libmachine: (ha-033260) DBG | unable to find current IP address of domain ha-033260 in network mk-ha-033260
	I0930 11:30:28.003005   34720 main.go:141] libmachine: (ha-033260) DBG | I0930 11:30:28.002951   35728 retry.go:31] will retry after 2.762675727s: waiting for machine to come up
	I0930 11:30:30.769002   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.769600   34720 main.go:141] libmachine: (ha-033260) Found IP for machine: 192.168.39.249
	I0930 11:30:30.769647   34720 main.go:141] libmachine: (ha-033260) Reserving static IP address...
	I0930 11:30:30.769660   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has current primary IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.770061   34720 main.go:141] libmachine: (ha-033260) Reserved static IP address: 192.168.39.249
	I0930 11:30:30.770097   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.770113   34720 main.go:141] libmachine: (ha-033260) Waiting for SSH to be available...
	I0930 11:30:30.770138   34720 main.go:141] libmachine: (ha-033260) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260", mac: "52:54:00:b0:2e:07", ip: "192.168.39.249"}
	I0930 11:30:30.770150   34720 main.go:141] libmachine: (ha-033260) DBG | Getting to WaitForSSH function...
	I0930 11:30:30.772370   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.772760   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.772873   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH client type: external
	I0930 11:30:30.772897   34720 main.go:141] libmachine: (ha-033260) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa (-rw-------)
	I0930 11:30:30.772957   34720 main.go:141] libmachine: (ha-033260) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:30.772978   34720 main.go:141] libmachine: (ha-033260) DBG | About to run SSH command:
	I0930 11:30:30.772991   34720 main.go:141] libmachine: (ha-033260) DBG | exit 0
	I0930 11:30:30.902261   34720 main.go:141] libmachine: (ha-033260) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:30.902682   34720 main.go:141] libmachine: (ha-033260) Calling .GetConfigRaw
	I0930 11:30:30.903345   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:30.905986   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906435   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.906466   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.906792   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:30.907003   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:30.907027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:30.907234   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:30.909474   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.909877   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:30.909908   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:30.910031   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:30.910192   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910303   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:30.910430   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:30.910552   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:30.910754   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:30.910767   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:31.026522   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:31.026555   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.026772   34720 buildroot.go:166] provisioning hostname "ha-033260"
	I0930 11:30:31.026799   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.027007   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.029600   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.029965   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.029992   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.030147   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.030327   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030457   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.030592   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.030726   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.030900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.030913   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260 && echo "ha-033260" | sudo tee /etc/hostname
	I0930 11:30:31.158417   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260
	
	I0930 11:30:31.158470   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.161439   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.161861   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.161898   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.162135   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.162317   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162476   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.162595   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.162742   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.162897   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.162912   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:31.283806   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:31.283837   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:31.283864   34720 buildroot.go:174] setting up certificates
	I0930 11:30:31.283877   34720 provision.go:84] configureAuth start
	I0930 11:30:31.283888   34720 main.go:141] libmachine: (ha-033260) Calling .GetMachineName
	I0930 11:30:31.284156   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:31.287095   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287561   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.287586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.287860   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.290260   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290610   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.290638   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.290768   34720 provision.go:143] copyHostCerts
	I0930 11:30:31.290802   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290847   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:31.290855   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:31.290923   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:31.291012   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291029   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:31.291036   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:31.291062   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:31.291116   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291138   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:31.291144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:31.291169   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:31.291235   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260 san=[127.0.0.1 192.168.39.249 ha-033260 localhost minikube]
	I0930 11:30:31.357378   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:31.357434   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:31.357461   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.360265   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360612   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.360639   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.360895   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.361087   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.361219   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.361344   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.448948   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:31.449019   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:31.478937   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:31.479012   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0930 11:30:31.509585   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:31.509668   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:31.539539   34720 provision.go:87] duration metric: took 255.649967ms to configureAuth
	I0930 11:30:31.539565   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:31.539759   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:31.539826   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.542626   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543038   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.543072   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.543261   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.543501   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543644   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.543761   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.543949   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:31.544136   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:31.544151   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:31.800600   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:31.800624   34720 machine.go:96] duration metric: took 893.601125ms to provisionDockerMachine
	I0930 11:30:31.800638   34720 start.go:293] postStartSetup for "ha-033260" (driver="kvm2")
	I0930 11:30:31.800650   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:31.800670   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.801007   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:31.801030   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.803813   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804193   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.804222   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.804441   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.804604   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.804769   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.804939   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:31.893164   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:31.898324   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:31.898349   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:31.898488   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:31.898642   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:31.898657   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:31.898771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:31.909611   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:31.940213   34720 start.go:296] duration metric: took 139.562436ms for postStartSetup
	I0930 11:30:31.940253   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:31.940567   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:31.940600   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:31.943464   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.943880   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:31.943909   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:31.944048   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:31.944346   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:31.944569   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:31.944768   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.028986   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:32.029069   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:32.087362   34720 fix.go:56] duration metric: took 19.180639105s for fixHost
	I0930 11:30:32.087405   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.090539   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.090962   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.090988   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.091151   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.091371   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091585   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.091707   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.091851   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:32.092025   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0930 11:30:32.092044   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:32.206950   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695832.171402259
	
	I0930 11:30:32.206975   34720 fix.go:216] guest clock: 1727695832.171402259
	I0930 11:30:32.206982   34720 fix.go:229] Guest: 2024-09-30 11:30:32.171402259 +0000 UTC Remote: 2024-09-30 11:30:32.087388641 +0000 UTC m=+308.814519334 (delta=84.013618ms)
	I0930 11:30:32.207008   34720 fix.go:200] guest clock delta is within tolerance: 84.013618ms
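The two timestamps compared above come from running `date +%s.%N` on the guest and from the host-side reference clock; the fix step only resyncs the guest when the difference exceeds a small tolerance. A minimal sketch of that comparison, assuming an illustrative tolerance constant (the real constant in fix.go may differ):

package main

import (
	"fmt"
	"math"
	"time"
)

// maxClockDesyncSeconds is an assumed tolerance for illustration only.
const maxClockDesyncSeconds = 2.0

func main() {
	// guestSeconds is what `date +%s.%N` on the guest returned in the log.
	guestSeconds := 1727695832.171402259
	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))

	// The "Remote" timestamp in the log is the host-side reference clock.
	remote := time.Now()

	delta := guest.Sub(remote)
	if math.Abs(delta.Seconds()) <= maxClockDesyncSeconds {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}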
	I0930 11:30:32.207014   34720 start.go:83] releasing machines lock for "ha-033260", held for 19.300329364s
	I0930 11:30:32.207037   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.207322   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:32.209968   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210394   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.210419   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.210638   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211106   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211267   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:30:32.211375   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:32.211419   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.211462   34720 ssh_runner.go:195] Run: cat /version.json
	I0930 11:30:32.211487   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:30:32.213826   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214176   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214200   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214221   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214463   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.214607   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.214713   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.214734   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:32.214757   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:32.214877   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.214902   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:30:32.215061   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:30:32.215198   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:30:32.215320   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:30:32.318873   34720 ssh_runner.go:195] Run: systemctl --version
	I0930 11:30:32.325516   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:32.483433   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:32.489924   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:32.489999   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:32.509691   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:32.509716   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:32.509773   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:32.529220   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:32.544880   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:32.544953   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:32.561347   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:32.576185   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:32.696192   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:32.856000   34720 docker.go:233] disabling docker service ...
	I0930 11:30:32.856061   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:32.872115   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:32.886462   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:33.019718   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:33.149810   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:33.165943   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:33.188911   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:33.188984   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.202121   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:33.202191   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.214960   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.227336   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.239366   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:33.251818   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.264121   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.285246   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:33.297242   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:30:33.307951   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:30:33.308020   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:30:33.324031   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:30:33.335459   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:33.464418   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
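The sysctl probe above exits with status 255 because /proc/sys/net/bridge only exists once br_netfilter is loaded, so the fallback is to modprobe the module and then enable IPv4 forwarding before restarting CRI-O. A sketch of that try-then-fallback sequence using the same commands the log shows (error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter tries the sysctl first; when the key is missing it loads
// br_netfilter and then turns on IPv4 forwarding, mirroring the log above.
func ensureNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
		return nil // bridge netfilter already available
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}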
	I0930 11:30:33.563219   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:30:33.563313   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:30:33.568915   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:30:33.568982   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:30:33.575600   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:30:33.617027   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:30:33.617123   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.651093   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:30:33.682607   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:30:33.684108   34720 main.go:141] libmachine: (ha-033260) Calling .GetIP
	I0930 11:30:33.687198   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687568   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:30:33.687586   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:30:33.687860   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:30:33.692422   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
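The one-liner above is the idiom used to pin host.minikube.internal: filter out any stale entry for the name, append the fresh mapping, and copy the temp file over /etc/hosts in a single step. A sketch of driving the same idiom from Go (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry rewrites /etc/hosts the same way the log does: drop any
// line already ending in the name, append the new "IP<TAB>name" mapping,
// and copy the temp file back into place.
func ensureHostsEntry(ip, name string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		name, ip, name)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := ensureHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}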
	I0930 11:30:33.706358   34720 kubeadm.go:883] updating cluster {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:30:33.706513   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:33.706553   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:33.741648   34720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 11:30:33.741712   34720 ssh_runner.go:195] Run: which lz4
	I0930 11:30:33.746514   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0930 11:30:33.746605   34720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:30:33.751033   34720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:30:33.751094   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 11:30:35.211096   34720 crio.go:462] duration metric: took 1.464517464s to copy over tarball
	I0930 11:30:35.211178   34720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:30:37.290495   34720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.079293521s)
	I0930 11:30:37.290519   34720 crio.go:469] duration metric: took 2.079396835s to extract the tarball
	I0930 11:30:37.290526   34720 ssh_runner.go:146] rm: /preloaded.tar.lz4
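The preload flow above is: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if it does not, unpack it into /var with lz4, then remove the tarball. A compressed sketch of that sequence, with the scp step left as a placeholder comment:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, matching the
// tar invocation in the log, and removes the tarball afterwards.
func extractPreload(tarball string) error {
	if exec.Command("stat", "-c", "%s %y", tarball).Run() != nil {
		fmt.Println("tarball not on guest yet; minikube copies it over from its host cache here")
	}
	out, err := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}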
	I0930 11:30:37.328103   34720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:30:37.375779   34720 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:30:37.375803   34720 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:30:37.375810   34720 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.31.1 crio true true} ...
	I0930 11:30:37.375919   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:30:37.376009   34720 ssh_runner.go:195] Run: crio config
	I0930 11:30:37.430483   34720 cni.go:84] Creating CNI manager for ""
	I0930 11:30:37.430505   34720 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0930 11:30:37.430513   34720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:30:37.430534   34720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-033260 NodeName:ha-033260 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:30:37.430658   34720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-033260"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
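The InitConfiguration/ClusterConfiguration/KubeletConfiguration documents above are rendered from the kubeadm options listed a few lines earlier. A minimal text/template sketch of producing such a document from those values; the struct fields and template below are trimmed illustrations, not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is an illustrative subset of the options shown in the log.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.39.249",
		BindPort:          8443,
		NodeName:          "ha-033260",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.31.1",
	})
}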
	
	I0930 11:30:37.430678   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:30:37.430719   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:30:37.447824   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:30:37.447927   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
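The kube-vip.go lines above show the decision behind the lb_enable/lb_port entries: the IPVS kernel modules are loaded first, and only if that succeeds is control-plane load balancing switched on in the generated manifest. A sketch of that gate (the helper function and env map are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// kubeVipEnv returns the base kube-vip environment and adds the load-balancer
// settings only when the IPVS modules can be loaded, mirroring the log above.
func kubeVipEnv(address string) map[string]string {
	env := map[string]string{
		"vip_arp":   "true",
		"port":      "8443",
		"cp_enable": "true",
		"address":   address,
	}
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	if err == nil {
		// auto-enabling control-plane load-balancing in kube-vip
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	return env
}

func main() {
	fmt.Println(kubeVipEnv("192.168.39.254"))
}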
	I0930 11:30:37.447977   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:30:37.458530   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:30:37.458608   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0930 11:30:37.469126   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0930 11:30:37.487666   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:30:37.505980   34720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0930 11:30:37.524942   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:30:37.543099   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:30:37.547174   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:30:37.560565   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:30:37.703633   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:30:37.722433   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.249
	I0930 11:30:37.722455   34720 certs.go:194] generating shared ca certs ...
	I0930 11:30:37.722471   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:37.722631   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:30:37.722669   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:30:37.722678   34720 certs.go:256] generating profile certs ...
	I0930 11:30:37.722756   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:30:37.722813   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.286728c8
	I0930 11:30:37.722850   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:30:37.722861   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:30:37.722873   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:30:37.722886   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:30:37.722898   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:30:37.722909   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:30:37.722931   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:30:37.722944   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:30:37.722956   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:30:37.723015   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:30:37.723047   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:30:37.723058   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:30:37.723082   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:30:37.723107   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:30:37.723127   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:30:37.723167   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:37.723194   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:37.723207   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:30:37.723219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:30:37.723778   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:30:37.765086   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:30:37.796973   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:30:37.825059   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:30:37.855521   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0930 11:30:37.899131   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:30:37.930900   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:30:37.980558   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:30:38.038804   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:30:38.087704   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:30:38.115070   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:30:38.143055   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:30:38.165228   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:30:38.181120   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:30:38.193472   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199554   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.199622   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:30:38.206544   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:30:38.218674   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:30:38.230696   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235800   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.235869   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:30:38.242027   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:30:38.253962   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:30:38.265695   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270860   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.270930   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:30:38.277134   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
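Each certificate installed under /usr/share/ca-certificates is then made discoverable to OpenSSL-based clients by symlinking <subject-hash>.0 in /etc/ssl/certs to it, which is what the paired openssl x509 -hash and ln -fs commands above do. A sketch of that step (helper name illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert asks openssl for the subject hash of the certificate and creates
// the <hash>.0 symlink in /etc/ssl/certs, replacing any existing link (ln -fs).
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}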
	I0930 11:30:38.288946   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:30:38.294078   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:30:38.300823   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:30:38.307442   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:30:38.314085   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:30:38.320482   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:30:38.327174   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
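The six -checkend 86400 probes above ask whether each control-plane certificate will still be valid in 24 hours, so that any expiring ones can be renewed during the restart. The same check expressed in Go with crypto/x509 instead of the openssl binary:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the certificate at path becomes invalid within
// the next 24 hours, the Go equivalent of `openssl x509 -checkend 86400`.
func expiresSoon(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}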
	I0930 11:30:38.333995   34720 kubeadm.go:392] StartCluster: {Name:ha-033260 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:30:38.334150   34720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:30:38.334251   34720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:30:38.372351   34720 cri.go:89] found id: ""
	I0930 11:30:38.372413   34720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:30:38.383026   34720 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:30:38.383043   34720 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:30:38.383100   34720 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:30:38.394015   34720 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:30:38.394528   34720 kubeconfig.go:125] found "ha-033260" server: "https://192.168.39.254:8443"
	I0930 11:30:38.394558   34720 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.249:8443
	I0930 11:30:38.394772   34720 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-3842/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I0930 11:30:38.395022   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.395487   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.395704   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:30:38.396149   34720 cert_rotation.go:140] Starting client certificate rotation controller
	I0930 11:30:38.396377   34720 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:30:38.407784   34720 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.249
	I0930 11:30:38.407813   34720 kubeadm.go:597] duration metric: took 24.764144ms to restartPrimaryControlPlane
	I0930 11:30:38.407821   34720 kubeadm.go:394] duration metric: took 73.840194ms to StartCluster
	I0930 11:30:38.407838   34720 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:30:38.407924   34720 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:30:38.408750   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
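kubeconfig.go above flags the file because it stores the HA virtual IP (192.168.39.254) while this node-level restart expects the node's own address, so the server field is rewritten before the client config is built against 192.168.39.249. A sketch of that check-and-rewrite (function and variable names illustrative):

package main

import (
	"fmt"
	"net/url"
)

// repairServer returns the server URL rewritten to the wanted host:port and
// whether a change was needed, mirroring the "got ... want ..." check above.
func repairServer(server, wantHost string, wantPort int) (string, bool, error) {
	u, err := url.Parse(server)
	if err != nil {
		return "", false, err
	}
	want := fmt.Sprintf("%s:%d", wantHost, wantPort)
	if u.Host == want {
		return server, false, nil
	}
	u.Host = want
	return u.String(), true, nil
}

func main() {
	fixed, changed, _ := repairServer("https://192.168.39.254:8443", "192.168.39.249", 8443)
	fmt.Println(fixed, changed) // https://192.168.39.249:8443 true
}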
	I0930 11:30:38.409039   34720 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:30:38.409099   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:30:38.409119   34720 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:30:38.409305   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.411175   34720 out.go:177] * Enabled addons: 
	I0930 11:30:38.412776   34720 addons.go:510] duration metric: took 3.663147ms for enable addons: enabled=[]
	I0930 11:30:38.412820   34720 start.go:246] waiting for cluster config update ...
	I0930 11:30:38.412828   34720 start.go:255] writing updated cluster config ...
	I0930 11:30:38.414670   34720 out.go:201] 
	I0930 11:30:38.416408   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:38.416501   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.418474   34720 out.go:177] * Starting "ha-033260-m02" control-plane node in "ha-033260" cluster
	I0930 11:30:38.419875   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:30:38.419902   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:30:38.420019   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:30:38.420031   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:30:38.420138   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:38.420331   34720 start.go:360] acquireMachinesLock for ha-033260-m02: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:30:38.420373   34720 start.go:364] duration metric: took 24.584µs to acquireMachinesLock for "ha-033260-m02"
	I0930 11:30:38.420384   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:30:38.420389   34720 fix.go:54] fixHost starting: m02
	I0930 11:30:38.420682   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:30:38.420704   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:30:38.436048   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0930 11:30:38.436591   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:30:38.437106   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:30:38.437129   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:30:38.437434   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:30:38.437608   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:38.437762   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetState
	I0930 11:30:38.439609   34720 fix.go:112] recreateIfNeeded on ha-033260-m02: state=Stopped err=<nil>
	I0930 11:30:38.439637   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	W0930 11:30:38.439785   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:30:38.443504   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m02" ...
	I0930 11:30:38.445135   34720 main.go:141] libmachine: (ha-033260-m02) Calling .Start
	I0930 11:30:38.445476   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring networks are active...
	I0930 11:30:38.446588   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network default is active
	I0930 11:30:38.447039   34720 main.go:141] libmachine: (ha-033260-m02) Ensuring network mk-ha-033260 is active
	I0930 11:30:38.447376   34720 main.go:141] libmachine: (ha-033260-m02) Getting domain xml...
	I0930 11:30:38.448426   34720 main.go:141] libmachine: (ha-033260-m02) Creating domain...
	I0930 11:30:39.710879   34720 main.go:141] libmachine: (ha-033260-m02) Waiting to get IP...
	I0930 11:30:39.711874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.712365   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.712441   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.712367   35943 retry.go:31] will retry after 217.001727ms: waiting for machine to come up
	I0930 11:30:39.931176   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:39.931746   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:39.931795   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:39.931690   35943 retry.go:31] will retry after 360.379717ms: waiting for machine to come up
	I0930 11:30:40.293305   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.293927   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.293956   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.293884   35943 retry.go:31] will retry after 440.189289ms: waiting for machine to come up
	I0930 11:30:40.735666   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:40.736141   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:40.736171   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:40.736077   35943 retry.go:31] will retry after 458.690004ms: waiting for machine to come up
	I0930 11:30:41.196951   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.197392   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.197421   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.197336   35943 retry.go:31] will retry after 554.052986ms: waiting for machine to come up
	I0930 11:30:41.753199   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:41.753680   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:41.753707   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:41.753643   35943 retry.go:31] will retry after 931.699083ms: waiting for machine to come up
	I0930 11:30:42.686931   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:42.687320   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:42.687351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:42.687256   35943 retry.go:31] will retry after 1.166098452s: waiting for machine to come up
	I0930 11:30:43.855595   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:43.856179   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:43.856196   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:43.856132   35943 retry.go:31] will retry after 902.212274ms: waiting for machine to come up
	I0930 11:30:44.759588   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:44.760139   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:44.760169   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:44.760094   35943 retry.go:31] will retry after 1.732785907s: waiting for machine to come up
	I0930 11:30:46.495220   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:46.495722   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:46.495751   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:46.495670   35943 retry.go:31] will retry after 1.455893126s: waiting for machine to come up
	I0930 11:30:47.952835   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:47.953164   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:47.953186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:47.953117   35943 retry.go:31] will retry after 1.846394006s: waiting for machine to come up
	I0930 11:30:49.801836   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:49.802224   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:49.802255   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:49.802148   35943 retry.go:31] will retry after 3.334677314s: waiting for machine to come up
	I0930 11:30:53.140758   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:53.141162   34720 main.go:141] libmachine: (ha-033260-m02) DBG | unable to find current IP address of domain ha-033260-m02 in network mk-ha-033260
	I0930 11:30:53.141198   34720 main.go:141] libmachine: (ha-033260-m02) DBG | I0930 11:30:53.141142   35943 retry.go:31] will retry after 4.392553354s: waiting for machine to come up
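The retry.go lines above poll the libvirt DHCP leases for the VM's MAC and sleep for a growing, jittered interval between attempts (217ms, 360ms, 440ms, and so on up to several seconds) until the machine reports an address. A sketch of that wait loop; lookupLease stands in for the lease query and the backoff constants are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupLease with a roughly doubling, jittered delay until
// it yields an IP or the attempt budget is exhausted.
func waitForIP(lookupLease func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupLease(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.3", nil
	}, 10)
	fmt.Println(ip, err)
}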
	I0930 11:30:57.535667   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536094   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has current primary IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.536115   34720 main.go:141] libmachine: (ha-033260-m02) Found IP for machine: 192.168.39.3
	I0930 11:30:57.536128   34720 main.go:141] libmachine: (ha-033260-m02) Reserving static IP address...
	I0930 11:30:57.536667   34720 main.go:141] libmachine: (ha-033260-m02) Reserved static IP address: 192.168.39.3
	I0930 11:30:57.536690   34720 main.go:141] libmachine: (ha-033260-m02) Waiting for SSH to be available...
	I0930 11:30:57.536702   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.536717   34720 main.go:141] libmachine: (ha-033260-m02) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m02", mac: "52:54:00:31:8f:e1", ip: "192.168.39.3"}
	I0930 11:30:57.536733   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Getting to WaitForSSH function...
	I0930 11:30:57.538801   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539092   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.539118   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.539287   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH client type: external
	I0930 11:30:57.539307   34720 main.go:141] libmachine: (ha-033260-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa (-rw-------)
	I0930 11:30:57.539337   34720 main.go:141] libmachine: (ha-033260-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:30:57.539351   34720 main.go:141] libmachine: (ha-033260-m02) DBG | About to run SSH command:
	I0930 11:30:57.539361   34720 main.go:141] libmachine: (ha-033260-m02) DBG | exit 0
	I0930 11:30:57.665932   34720 main.go:141] libmachine: (ha-033260-m02) DBG | SSH cmd err, output: <nil>: 
	I0930 11:30:57.666273   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetConfigRaw
	I0930 11:30:57.666869   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:57.669186   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669581   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.669611   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.669933   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:30:57.670195   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:30:57.670214   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:57.670410   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.672489   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.672840   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.672867   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.673009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.673202   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673389   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.673514   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.673661   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.673838   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.673848   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:30:57.786110   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:30:57.786133   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786377   34720 buildroot.go:166] provisioning hostname "ha-033260-m02"
	I0930 11:30:57.786400   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:57.786574   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.789039   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789439   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.789465   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.789633   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.789794   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.789948   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.790053   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.790195   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.790374   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.790385   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m02 && echo "ha-033260-m02" | sudo tee /etc/hostname
	I0930 11:30:57.917415   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m02
	
	I0930 11:30:57.917438   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:57.920154   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920496   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:57.920529   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:57.920721   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:57.920892   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921046   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:57.921171   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:57.921311   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:57.921493   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:57.921509   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:30:58.045391   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:30:58.045417   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:30:58.045437   34720 buildroot.go:174] setting up certificates
	I0930 11:30:58.045462   34720 provision.go:84] configureAuth start
	I0930 11:30:58.045479   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetMachineName
	I0930 11:30:58.045758   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.048321   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048721   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.048743   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.048920   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.051229   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051564   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.051591   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.051758   34720 provision.go:143] copyHostCerts
	I0930 11:30:58.051783   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051822   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:30:58.051830   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:30:58.051885   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:30:58.051973   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.051994   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:30:58.051999   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:30:58.052023   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:30:58.052120   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052140   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:30:58.052144   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:30:58.052164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:30:58.052236   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m02 san=[127.0.0.1 192.168.39.3 ha-033260-m02 localhost minikube]
	I0930 11:30:58.137309   34720 provision.go:177] copyRemoteCerts
	I0930 11:30:58.137363   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:30:58.137388   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.139915   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140158   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.140185   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.140386   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.140552   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.140695   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.140798   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.228976   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:30:58.229076   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:30:58.254635   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:30:58.254717   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:30:58.279904   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:30:58.279982   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:30:58.305451   34720 provision.go:87] duration metric: took 259.975115ms to configureAuth
	I0930 11:30:58.305480   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:30:58.305758   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:30:58.305834   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.308335   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.308803   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.308825   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.309009   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.309198   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309332   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.309439   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.309633   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.309804   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.309818   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:30:58.549247   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:30:58.549271   34720 machine.go:96] duration metric: took 879.062425ms to provisionDockerMachine
	I0930 11:30:58.549282   34720 start.go:293] postStartSetup for "ha-033260-m02" (driver="kvm2")
	I0930 11:30:58.549291   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:30:58.549307   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.549711   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:30:58.549753   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.552476   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.552924   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.552952   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.553077   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.553265   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.553440   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.553591   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.641113   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:30:58.645683   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:30:58.645710   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:30:58.645780   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:30:58.645871   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:30:58.645881   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:30:58.645976   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:30:58.656118   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:30:58.683428   34720 start.go:296] duration metric: took 134.134961ms for postStartSetup
	I0930 11:30:58.683471   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.683772   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:30:58.683796   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.686150   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686552   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.686580   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.686712   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.686921   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.687033   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.687137   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.772957   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:30:58.773054   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:30:58.831207   34720 fix.go:56] duration metric: took 20.410809297s for fixHost
	I0930 11:30:58.831256   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.834153   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834531   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.834561   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.834754   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.834963   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835129   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.835280   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.835497   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:30:58.835715   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.3 22 <nil> <nil>}
	I0930 11:30:58.835747   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:30:58.950852   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695858.923209005
	
	I0930 11:30:58.950874   34720 fix.go:216] guest clock: 1727695858.923209005
	I0930 11:30:58.950882   34720 fix.go:229] Guest: 2024-09-30 11:30:58.923209005 +0000 UTC Remote: 2024-09-30 11:30:58.831234705 +0000 UTC m=+335.558365405 (delta=91.9743ms)
	I0930 11:30:58.950897   34720 fix.go:200] guest clock delta is within tolerance: 91.9743ms
	I0930 11:30:58.950902   34720 start.go:83] releasing machines lock for "ha-033260-m02", held for 20.530522823s
	I0930 11:30:58.950922   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.951203   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:30:58.954037   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.954470   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.954495   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.956428   34720 out.go:177] * Found network options:
	I0930 11:30:58.958147   34720 out.go:177]   - NO_PROXY=192.168.39.249
	W0930 11:30:58.959662   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.959685   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960216   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960383   34720 main.go:141] libmachine: (ha-033260-m02) Calling .DriverName
	I0930 11:30:58.960470   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:30:58.960516   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	W0930 11:30:58.960557   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:30:58.960638   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:30:58.960661   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHHostname
	I0930 11:30:58.963506   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963693   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.963874   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.963901   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964044   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964186   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964190   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:30:58.964217   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:30:58.964364   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHPort
	I0930 11:30:58.964379   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964505   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:58.964524   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHKeyPath
	I0930 11:30:58.964643   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetSSHUsername
	I0930 11:30:58.964756   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m02/id_rsa Username:docker}
	I0930 11:30:59.185932   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:30:59.192578   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:30:59.192645   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:30:59.212639   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:30:59.212663   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:30:59.212730   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:30:59.233596   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:30:59.248239   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:30:59.248310   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:30:59.262501   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:30:59.277031   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:30:59.408627   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:30:59.575087   34720 docker.go:233] disabling docker service ...
	I0930 11:30:59.575157   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:30:59.590510   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:30:59.605151   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:30:59.739478   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:30:59.876906   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:30:59.891632   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:30:59.911543   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:30:59.911601   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.923050   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:30:59.923114   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.934427   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.945682   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.957111   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:30:59.968813   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.980975   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:30:59.999767   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:31:00.011463   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:31:00.021740   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:31:00.021804   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:31:00.036575   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:31:00.046724   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:00.166031   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:31:00.263048   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:31:00.263104   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:31:00.268250   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:31:00.268319   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:31:00.272426   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:31:00.321494   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:31:00.321561   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.350506   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:31:00.381505   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:31:00.383057   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:31:00.384433   34720 main.go:141] libmachine: (ha-033260-m02) Calling .GetIP
	I0930 11:31:00.387430   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.387871   34720 main.go:141] libmachine: (ha-033260-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:8f:e1", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:50 +0000 UTC Type:0 Mac:52:54:00:31:8f:e1 Iaid: IPaddr:192.168.39.3 Prefix:24 Hostname:ha-033260-m02 Clientid:01:52:54:00:31:8f:e1}
	I0930 11:31:00.387903   34720 main.go:141] libmachine: (ha-033260-m02) DBG | domain ha-033260-m02 has defined IP address 192.168.39.3 and MAC address 52:54:00:31:8f:e1 in network mk-ha-033260
	I0930 11:31:00.388092   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:31:00.392819   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:00.406199   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:31:00.406474   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:00.406842   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.406891   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.421565   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0930 11:31:00.422022   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.422477   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.422501   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.422814   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.423031   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:31:00.424747   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:31:00.425025   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:00.425059   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:00.439760   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34553
	I0930 11:31:00.440237   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:00.440699   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:00.440716   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:00.441029   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:00.441215   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:31:00.441357   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.3
	I0930 11:31:00.441367   34720 certs.go:194] generating shared ca certs ...
	I0930 11:31:00.441380   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.441501   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:31:00.441541   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:31:00.441555   34720 certs.go:256] generating profile certs ...
	I0930 11:31:00.441653   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:31:00.441679   34720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173
	I0930 11:31:00.441696   34720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249 192.168.39.3 192.168.39.238 192.168.39.254]
	I0930 11:31:00.711479   34720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 ...
	I0930 11:31:00.711512   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173: {Name:mk8969b2efcc5de06d527c6abe25d7f8f8bfba86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711706   34720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 ...
	I0930 11:31:00.711723   34720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173: {Name:mkcb971c29eb187169c6672af3a12c14dd523134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:31:00.711815   34720 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt
	I0930 11:31:00.711977   34720 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.2551b173 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key
	I0930 11:31:00.712110   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:31:00.712126   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:31:00.712141   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:31:00.712175   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:31:00.712192   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:31:00.712204   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:31:00.712217   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:31:00.712228   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:31:00.712238   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:31:00.712287   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:31:00.712314   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:31:00.712324   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:31:00.712348   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:31:00.712369   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:31:00.712408   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:31:00.712446   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:31:00.712473   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:31:00.712487   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:00.712499   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:31:00.712528   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:31:00.715756   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716154   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:31:00.716181   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:31:00.716374   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:31:00.716558   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:31:00.716720   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:31:00.716893   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:31:00.794084   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:31:00.799675   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:31:00.812361   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:31:00.817141   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:31:00.828855   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:31:00.833566   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:31:00.844934   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:31:00.849462   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:31:00.860080   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:31:00.864183   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:31:00.875695   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:31:00.880202   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:31:00.891130   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:31:00.918693   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:31:00.944303   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:31:00.969526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:31:00.996710   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:31:01.023015   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:31:01.050381   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:31:01.076757   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:31:01.103526   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:31:01.129114   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:31:01.155177   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:31:01.180954   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:31:01.199391   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:31:01.218184   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:31:01.238266   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:31:01.258183   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:31:01.276632   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:31:01.294303   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:31:01.312244   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:31:01.318735   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:31:01.330839   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.335928   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.336000   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:31:01.342463   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:31:01.353941   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:31:01.365658   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370653   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.370714   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:31:01.376795   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:31:01.388155   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:31:01.399831   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404901   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.404967   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:31:01.411138   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:31:01.422294   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:31:01.426988   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:31:01.433816   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:31:01.440682   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:31:01.447200   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:31:01.454055   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:31:01.460508   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:31:01.466735   34720 kubeadm.go:934] updating node {m02 192.168.39.3 8443 v1.31.1 crio true true} ...
	I0930 11:31:01.466882   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:31:01.466926   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:31:01.466986   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:31:01.485425   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:31:01.485500   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:31:01.485555   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:31:01.495844   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:31:01.495903   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:31:01.505526   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0930 11:31:01.523077   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:31:01.540915   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:31:01.558204   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:31:01.562410   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:31:01.575484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.701502   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.719655   34720 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:31:01.719937   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:01.723162   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:31:01.724484   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:31:01.910906   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:31:01.933340   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:31:01.933718   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:31:01.933803   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:31:01.934081   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m02" to be "Ready" ...
	I0930 11:31:01.934248   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:01.934259   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:01.934274   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:01.934285   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:06.735523   34720 round_trippers.go:574] Response Status:  in 4801 milliseconds
	I0930 11:31:07.735873   34720 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735937   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:07.735944   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:07.735954   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:07.735960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:17.737130   34720 round_trippers.go:574] Response Status:  in 10001 milliseconds
	I0930 11:31:17.737228   34720 node_ready.go:53] error getting node "ha-033260-m02": Get "https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.39.1:51024->192.168.39.249:8443: read: connection reset by peer
	I0930 11:31:17.737312   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:17.737324   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:17.737335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:17.737343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.500223   34720 round_trippers.go:574] Response Status: 200 OK in 3762 milliseconds
	I0930 11:31:21.501292   34720 node_ready.go:53] node "ha-033260-m02" has status "Ready":"Unknown"
	I0930 11:31:21.501373   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.501386   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.501397   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.501404   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.519310   34720 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0930 11:31:21.934926   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:21.934946   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:21.934956   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:21.934960   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:21.940164   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:22.434503   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.434527   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.434544   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.434553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.438661   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:22.934869   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:22.934914   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:22.934923   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:22.934927   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:22.937891   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.435280   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.435301   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.435309   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.435314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.441790   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.444141   34720 node_ready.go:49] node "ha-033260-m02" has status "Ready":"True"
	I0930 11:31:23.444180   34720 node_ready.go:38] duration metric: took 21.510052339s for node "ha-033260-m02" to be "Ready" ...
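
The node_ready wait above is a simple poll: roughly every 500ms the test GETs /api/v1/nodes/ha-033260-m02 and inspects the node's Ready condition, tolerating the transient TLS handshake timeout and connection reset seen at 11:31:17 while the VIP fails over to a healthy API server. A generic client-go version of that loop is sketched below; it is not minikube's node_ready.go, and the function name is illustrative.

```go
package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the node until its Ready condition is True or the timeout expires.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient API errors (TLS handshake timeouts, connection resets during
				// control-plane failover) are retried rather than treated as fatal.
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```
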
	I0930 11:31:23.444195   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:31:23.444252   34720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:31:23.444273   34720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:31:23.444364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:23.444380   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.444392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.444401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.454505   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:23.465935   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.466047   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:31:23.466061   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.466072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.466081   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.474857   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:23.475614   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.475635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.475647   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.475654   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.478510   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:23.479069   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.479097   34720 pod_ready.go:82] duration metric: took 13.131126ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
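
Each pod_ready check above pairs a GET of the pod with a GET of the node it runs on, and the pod counts as Ready only when its PodReady condition is True. The condition test itself is small; a generic version over client-go types is sketched below (illustrative helper, not minikube's pod_ready.go).

```go
package podstatus

import corev1 "k8s.io/api/core/v1"

// PodReady reports whether the pod's PodReady condition is True, which is the
// check the per-pod waits above keep re-evaluating until it flips.
func PodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```
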
	I0930 11:31:23.479109   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.479186   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:31:23.479199   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.479208   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.479213   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.485985   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.486909   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.486931   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.486941   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.486947   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490284   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.490832   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.490853   34720 pod_ready.go:82] duration metric: took 11.73655ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490864   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.490951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:31:23.490962   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.490972   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.490980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.498681   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:23.499421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:23.499443   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.499460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.499466   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.503369   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:23.503948   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:23.503974   34720 pod_ready.go:82] duration metric: took 13.102363ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.503986   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:23.504068   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:23.504080   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.504090   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.504097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.510528   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:23.511092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:23.511107   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:23.511115   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:23.511122   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:23.515703   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:24.004536   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.004560   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.004580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.004588   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.008341   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.009009   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.009023   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.009030   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.009038   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.011924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:24.504942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:24.504982   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.504991   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.504996   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.508600   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:24.509408   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:24.509428   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:24.509437   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:24.509441   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:24.512140   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.005082   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.005104   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.005112   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.005115   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.008608   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:25.009145   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.009159   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.009166   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.009172   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.012052   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:25.505333   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:25.505422   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.505445   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.505470   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.544680   34720 round_trippers.go:574] Response Status: 200 OK in 39 milliseconds
	I0930 11:31:25.545744   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:25.545758   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:25.545766   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:25.545771   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:25.559955   34720 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0930 11:31:25.560548   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:26.004848   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.004869   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.004877   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.004881   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.008562   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.009380   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.009397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.009407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.009413   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.012491   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.504290   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:26.504315   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.504327   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.504335   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.508059   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:26.508795   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:26.508813   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:26.508823   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:26.508828   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:26.512273   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.004525   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.004546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.004555   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.004560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009158   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:27.009942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.009959   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.009967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.009970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.013093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.505035   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:27.505082   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.505093   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.505100   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.508864   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:27.509652   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:27.509670   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:27.509681   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:27.509687   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:27.512440   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:28.005011   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.005040   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.005051   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.005058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.013343   34720 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0930 11:31:28.014728   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.014745   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.014754   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.014758   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.036177   34720 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0930 11:31:28.037424   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:28.504206   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:28.504241   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.504249   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.504254   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.511361   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:28.512356   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:28.512373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:28.512383   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:28.512389   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:28.525172   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:31:29.005163   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.005184   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.005195   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.005200   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.010684   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.011486   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.011516   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.011528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.011535   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.017470   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:29.505132   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:29.505152   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.505162   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.505168   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.518955   34720 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0930 11:31:29.519584   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:29.519602   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:29.519612   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:29.519619   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:29.530475   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:30.004860   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.004881   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.004889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.004893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.008564   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.009192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.009207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.009215   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.009220   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.013399   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.504171   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:30.504195   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.504205   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.504210   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.507972   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:30.509257   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:30.509275   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:30.509283   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:30.509286   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:30.513975   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:30.514510   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:31.004737   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.004765   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.004775   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.004780   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010196   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:31.010880   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.010900   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.010912   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.010919   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.014567   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:31.504379   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:31.504397   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.504405   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.504409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.511899   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:31.513088   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:31.513111   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:31.513122   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:31.513128   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:31.516398   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.005079   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.005119   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.005131   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.005138   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.009300   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:32.010097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.010118   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.010130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.010137   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.013237   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.505168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:32.505192   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.505203   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.505209   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.509155   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:32.509935   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:32.509953   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:32.509960   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:32.509964   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:32.513296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:33.004767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.004802   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.004812   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.004818   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.009316   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:33.009983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.009997   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.010005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.010018   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.012955   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:33.013498   34720 pod_ready.go:103] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"False"
	I0930 11:31:33.504397   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:33.504432   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.504443   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.504450   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.620464   34720 round_trippers.go:574] Response Status: 200 OK in 115 milliseconds
	I0930 11:31:33.621445   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:33.621467   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:33.621479   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:33.621486   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:33.624318   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.004311   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:31:34.004332   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.004341   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.004346   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.008601   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.009530   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.009546   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.009553   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.009556   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.013047   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.013767   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.013788   34720 pod_ready.go:82] duration metric: took 10.509794387s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013800   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.013877   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:31:34.013888   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.013899   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.013908   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.021427   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:31:34.022374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.022393   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.022405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.022412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.026491   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.027124   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.027154   34720 pod_ready.go:82] duration metric: took 13.341195ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027184   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.027276   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:31:34.027289   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.027300   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.027306   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.031483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:34.032050   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.032064   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.032072   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.032075   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.035296   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.035760   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.035779   34720 pod_ready.go:82] duration metric: took 8.586877ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035787   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.035853   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:31:34.035863   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.035870   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.035874   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.040970   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.041904   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:34.041918   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.041926   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.041929   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.046986   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.047525   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.047542   34720 pod_ready.go:82] duration metric: took 11.747596ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047550   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.047603   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:31:34.047611   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.047617   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.047621   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.053430   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:34.054003   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:34.054018   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.054025   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.054029   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.056888   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:31:34.057338   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:34.057358   34720 pod_ready.go:82] duration metric: took 9.802193ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.057367   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:34.204770   34720 request.go:632] Waited for 147.330113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204839   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.204844   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.204851   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.204860   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.209352   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
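
The request.go:632 lines above come from client-go's client-side rate limiter, not from API priority and fairness on the server: the kapi.go client config earlier left QPS and Burst at 0, so the client-go defaults (typically 5 QPS with a burst of 10) apply, and bursts of back-to-back pod/node GETs pick up 140-200ms waits. A sketch of building a clientset with a larger budget, which avoids those waits, follows; the kubeconfig source and the chosen numbers are illustrative.

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfigPath := os.Getenv("KUBECONFIG") // illustrative; any kubeconfig path works
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Zero values mean client-go's defaults; raising them trades politeness to the
	// API server for fewer "client-side throttling" waits in tight polling loops.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = cs // use the clientset for node/pod GETs like the ones shown in the log
}
```
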
	I0930 11:31:34.404334   34720 request.go:632] Waited for 194.306843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404424   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.404431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.404441   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.404444   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.408185   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.605268   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:34.605293   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.605306   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.605311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.608441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:34.804521   34720 request.go:632] Waited for 195.318558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804587   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:34.804592   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:34.804600   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:34.804607   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:34.808658   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.058569   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.058598   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.058609   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.058614   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.062153   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.204479   34720 request.go:632] Waited for 141.249746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204567   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.204575   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.204586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.204594   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.209332   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:35.558083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:35.558103   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.558111   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.558116   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.562046   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:35.605131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:35.605167   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:35.605179   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:35.605184   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:35.616080   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:31:36.058179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:31:36.058207   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.058218   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.058236   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.062566   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:36.063353   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:36.063373   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.063384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.063390   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.066635   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.067352   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.067373   34720 pod_ready.go:82] duration metric: took 2.009999965s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.067382   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.204802   34720 request.go:632] Waited for 137.362306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204868   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:31:36.204890   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.204901   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.204907   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.208231   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.404396   34720 request.go:632] Waited for 195.331717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:36.404465   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.404473   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.404477   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.408489   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.409278   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.409299   34720 pod_ready.go:82] duration metric: took 341.910503ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.409308   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.604639   34720 request.go:632] Waited for 195.258772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604699   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:31:36.604706   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.604716   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.604721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.608453   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.804560   34720 request.go:632] Waited for 195.30805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804622   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:36.804635   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:36.804645   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:36.804651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:36.808127   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:36.808836   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:36.808857   34720 pod_ready.go:82] duration metric: took 399.543561ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:36.808867   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.004923   34720 request.go:632] Waited for 195.985958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004973   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:31:37.004978   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.004985   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.004989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.008223   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.205282   34720 request.go:632] Waited for 196.371879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205357   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:31:37.205362   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.205369   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.205374   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.208700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.209207   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.209239   34720 pod_ready.go:82] duration metric: took 400.365138ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.209250   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.405282   34720 request.go:632] Waited for 195.959121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405389   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:31:37.405398   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.405409   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.405429   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.409314   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.605347   34720 request.go:632] Waited for 195.282379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:37.605431   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.605450   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.605459   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.608764   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:37.609479   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:37.609498   34720 pod_ready.go:82] duration metric: took 400.240233ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.609507   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:37.804579   34720 request.go:632] Waited for 195.010584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804657   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:31:37.804664   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:37.804671   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:37.804675   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:37.808363   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.005248   34720 request.go:632] Waited for 196.304263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005314   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:38.005321   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.005330   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.005333   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.009635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:31:38.010535   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.010557   34720 pod_ready.go:82] duration metric: took 401.042919ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.010566   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.204595   34720 request.go:632] Waited for 193.96721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204665   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:31:38.204677   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.204689   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.204696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.208393   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.404559   34720 request.go:632] Waited for 195.429784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.404620   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.404641   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.404646   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.408057   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.408674   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.408694   34720 pod_ready.go:82] duration metric: took 398.12275ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.408703   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.605374   34720 request.go:632] Waited for 196.589593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:31:38.605437   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.605444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.605449   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.609411   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.804516   34720 request.go:632] Waited for 194.287587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:31:38.804579   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:38.804586   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:38.804589   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:38.808043   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:38.808604   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:38.808623   34720 pod_ready.go:82] duration metric: took 399.91394ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:38.808637   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.004815   34720 request.go:632] Waited for 196.10639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004881   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:31:39.004887   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.004895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.004900   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.008293   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.204330   34720 request.go:632] Waited for 195.292523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204402   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:31:39.204410   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.204419   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.204428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.208212   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.208803   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.208826   34720 pod_ready.go:82] duration metric: took 400.181261ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.208843   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.404860   34720 request.go:632] Waited for 195.933233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404913   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:31:39.404919   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.404926   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.404931   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.408874   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.604903   34720 request.go:632] Waited for 195.413864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604970   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:31:39.604975   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.604983   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.604987   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.608209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:39.608764   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:31:39.608784   34720 pod_ready.go:82] duration metric: took 399.933732ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:31:39.608794   34720 pod_ready.go:39] duration metric: took 16.164585673s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
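The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (roughly 5 requests/second with a burst of 10), not from server-side API Priority and Fairness. A minimal sketch, assuming a plain client-go consumer (function name and values are illustrative, not minikube's actual code), of how such a client could raise those limits:

	// Package kubeclient: sketch of building a clientset with a higher
	// client-side rate limit than the defaults behind the ~200ms waits above.
	package kubeclient

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5
		cfg.Burst = 100 // client-go default is 10
		return kubernetes.NewForConfig(cfg)
	}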
	I0930 11:31:39.608807   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:31:39.608855   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:31:39.626199   34720 api_server.go:72] duration metric: took 37.906495975s to wait for apiserver process to appear ...
	I0930 11:31:39.626228   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:31:39.626249   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:31:39.630779   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:31:39.630856   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:31:39.630864   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.630872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.630879   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.631851   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:31:39.631971   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:31:39.631987   34720 api_server.go:131] duration metric: took 5.751654ms to wait for apiserver health ...
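The healthz probe and version lookup above can be reproduced with a plain HTTP client; by default the apiserver's system:public-info-viewer binding exposes /healthz and /version even to unauthenticated callers. A short sketch (TLS verification is skipped only to keep the example small; the real client trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		c := &http.Client{Transport: &http.Transport{
			// Skipping verification for brevity only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := c.Get("https://192.168.39.249:8443" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s %d %s\n", path, resp.StatusCode, string(body))
		}
	}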
	I0930 11:31:39.631994   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:31:39.805247   34720 request.go:632] Waited for 173.189912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805322   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:39.805328   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:39.805335   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:39.805339   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:39.811658   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:31:39.818704   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:31:39.818737   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818745   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:39.818751   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:39.818754   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:39.818758   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:39.818761   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:39.818766   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:39.818769   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:39.818772   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:39.818777   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:39.818781   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:39.818787   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:39.818792   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:39.818797   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:39.818803   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:39.818809   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:39.818814   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:39.818820   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:39.818828   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:39.818834   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:39.818840   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:39.818843   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:39.818846   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:39.818852   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:39.818855   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:39.818858   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:39.818864   34720 system_pods.go:74] duration metric: took 186.864889ms to wait for pod list to return data ...
	I0930 11:31:39.818873   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:31:40.005326   34720 request.go:632] Waited for 186.370068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:31:40.005389   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.005396   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.005401   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.009301   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.009537   34720 default_sa.go:45] found service account: "default"
	I0930 11:31:40.009555   34720 default_sa.go:55] duration metric: took 190.676192ms for default service account to be created ...
	I0930 11:31:40.009564   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:31:40.205063   34720 request.go:632] Waited for 195.430952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:31:40.205139   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.205147   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.205150   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.210696   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:31:40.219002   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:31:40.219052   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219065   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:31:40.219074   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:31:40.219081   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:31:40.219086   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:31:40.219092   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:31:40.219097   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:31:40.219103   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:31:40.219108   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:31:40.219115   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:31:40.219123   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:31:40.219130   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:31:40.219137   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:31:40.219145   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:31:40.219149   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:31:40.219155   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:31:40.219158   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:31:40.219162   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:31:40.219168   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:31:40.219171   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:31:40.219177   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:31:40.219181   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:31:40.219186   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:31:40.219190   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:31:40.219193   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:31:40.219196   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:31:40.219204   34720 system_pods.go:126] duration metric: took 209.632746ms to wait for k8s-apps to be running ...
	I0930 11:31:40.219213   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:31:40.219257   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:31:40.234570   34720 system_svc.go:56] duration metric: took 15.34883ms WaitForService to wait for kubelet
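The kubelet check above is just the exit status of systemctl is-active: zero means the unit is active. A sketch of the same check run locally on a node (minikube executes it over SSH through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the "sudo systemctl is-active --quiet service kubelet" command
		// in the log: a nil error means a zero exit status, i.e. kubelet is active.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}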
	I0930 11:31:40.234600   34720 kubeadm.go:582] duration metric: took 38.514901899s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:31:40.234618   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:31:40.405060   34720 request.go:632] Waited for 170.372351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405131   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:31:40.405138   34720 round_trippers.go:469] Request Headers:
	I0930 11:31:40.405146   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:31:40.405152   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:31:40.409008   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:31:40.411040   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411072   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411093   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411098   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411104   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411112   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411118   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:31:40.411123   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:31:40.411130   34720 node_conditions.go:105] duration metric: took 176.506295ms to run NodePressure ...
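The NodePressure step above reads the same two capacity fields for every node in the cluster. A sketch of an equivalent check with client-go (the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Same fields the log prints: CPU count and ephemeral storage capacity.
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
		}
	}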
	I0930 11:31:40.411143   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:31:40.411178   34720 start.go:255] writing updated cluster config ...
	I0930 11:31:40.413535   34720 out.go:201] 
	I0930 11:31:40.415246   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:31:40.415334   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.417113   34720 out.go:177] * Starting "ha-033260-m03" control-plane node in "ha-033260" cluster
	I0930 11:31:40.418650   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:31:40.418678   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:31:40.418775   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:31:40.418789   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:31:40.418878   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:40.419069   34720 start.go:360] acquireMachinesLock for ha-033260-m03: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:31:40.419116   34720 start.go:364] duration metric: took 28.328µs to acquireMachinesLock for "ha-033260-m03"
	I0930 11:31:40.419128   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:31:40.419133   34720 fix.go:54] fixHost starting: m03
	I0930 11:31:40.419393   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:31:40.419421   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:31:40.434730   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44029
	I0930 11:31:40.435197   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:31:40.435685   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:31:40.435709   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:31:40.436046   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:31:40.436205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:40.436359   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetState
	I0930 11:31:40.437971   34720 fix.go:112] recreateIfNeeded on ha-033260-m03: state=Stopped err=<nil>
	I0930 11:31:40.437995   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	W0930 11:31:40.438139   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:31:40.440134   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m03" ...
	I0930 11:31:40.441557   34720 main.go:141] libmachine: (ha-033260-m03) Calling .Start
	I0930 11:31:40.441787   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring networks are active...
	I0930 11:31:40.442656   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network default is active
	I0930 11:31:40.442963   34720 main.go:141] libmachine: (ha-033260-m03) Ensuring network mk-ha-033260 is active
	I0930 11:31:40.443304   34720 main.go:141] libmachine: (ha-033260-m03) Getting domain xml...
	I0930 11:31:40.443900   34720 main.go:141] libmachine: (ha-033260-m03) Creating domain...
	I0930 11:31:41.716523   34720 main.go:141] libmachine: (ha-033260-m03) Waiting to get IP...
	I0930 11:31:41.717310   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.717755   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.717843   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.717745   36275 retry.go:31] will retry after 213.974657ms: waiting for machine to come up
	I0930 11:31:41.933006   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:41.933445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:41.933470   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:41.933400   36275 retry.go:31] will retry after 366.443935ms: waiting for machine to come up
	I0930 11:31:42.300826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.301240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.301268   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.301200   36275 retry.go:31] will retry after 298.736034ms: waiting for machine to come up
	I0930 11:31:42.601863   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:42.602344   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:42.602373   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:42.602300   36275 retry.go:31] will retry after 422.049065ms: waiting for machine to come up
	I0930 11:31:43.025989   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.026495   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.026518   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.026460   36275 retry.go:31] will retry after 501.182735ms: waiting for machine to come up
	I0930 11:31:43.529199   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:43.529601   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:43.529643   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:43.529556   36275 retry.go:31] will retry after 658.388185ms: waiting for machine to come up
	I0930 11:31:44.189982   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:44.190445   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:44.190485   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:44.190396   36275 retry.go:31] will retry after 869.323325ms: waiting for machine to come up
	I0930 11:31:45.061299   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:45.061826   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:45.061855   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:45.061762   36275 retry.go:31] will retry after 1.477543518s: waiting for machine to come up
	I0930 11:31:46.540654   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:46.541062   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:46.541088   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:46.541024   36275 retry.go:31] will retry after 1.217619831s: waiting for machine to come up
	I0930 11:31:47.760283   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:47.760670   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:47.760692   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:47.760626   36275 retry.go:31] will retry after 1.524149013s: waiting for machine to come up
	I0930 11:31:49.286693   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:49.287173   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:49.287205   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:49.287119   36275 retry.go:31] will retry after 2.547999807s: waiting for machine to come up
	I0930 11:31:51.836378   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:51.836878   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:51.836903   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:51.836847   36275 retry.go:31] will retry after 3.478582774s: waiting for machine to come up
	I0930 11:31:55.318753   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:55.319267   34720 main.go:141] libmachine: (ha-033260-m03) DBG | unable to find current IP address of domain ha-033260-m03 in network mk-ha-033260
	I0930 11:31:55.319288   34720 main.go:141] libmachine: (ha-033260-m03) DBG | I0930 11:31:55.319225   36275 retry.go:31] will retry after 4.232487143s: waiting for machine to come up
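The "will retry after ..." lines above (213ms, 366ms, and so on up to a few seconds) show a capped, jittered backoff while polling libvirt for the VM's DHCP lease. A rough sketch of that pattern, with names and constants that are illustrative rather than minikube's actual retry.go:

	package machine

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup with an exponentially growing, jittered delay,
	// capped at a few seconds, until it returns an address or maxWait elapses.
	func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for an IP address")
	}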
	I0930 11:31:59.554587   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555031   34720 main.go:141] libmachine: (ha-033260-m03) Found IP for machine: 192.168.39.238
	I0930 11:31:59.555054   34720 main.go:141] libmachine: (ha-033260-m03) Reserving static IP address...
	I0930 11:31:59.555067   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has current primary IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.555464   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.555482   34720 main.go:141] libmachine: (ha-033260-m03) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m03", mac: "52:54:00:f2:70:c8", ip: "192.168.39.238"}
	I0930 11:31:59.555498   34720 main.go:141] libmachine: (ha-033260-m03) Reserved static IP address: 192.168.39.238
	I0930 11:31:59.555507   34720 main.go:141] libmachine: (ha-033260-m03) Waiting for SSH to be available...
	I0930 11:31:59.555514   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Getting to WaitForSSH function...
	I0930 11:31:59.558171   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558619   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.558660   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.558780   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH client type: external
	I0930 11:31:59.558806   34720 main.go:141] libmachine: (ha-033260-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa (-rw-------)
	I0930 11:31:59.558840   34720 main.go:141] libmachine: (ha-033260-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:31:59.558849   34720 main.go:141] libmachine: (ha-033260-m03) DBG | About to run SSH command:
	I0930 11:31:59.558869   34720 main.go:141] libmachine: (ha-033260-m03) DBG | exit 0
	I0930 11:31:59.689497   34720 main.go:141] libmachine: (ha-033260-m03) DBG | SSH cmd err, output: <nil>: 
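WaitForSSH above succeeds once "exit 0" runs cleanly through the external ssh client, which also proves that key authentication works. A simpler, weaker variant is to poll until the TCP port accepts connections; a sketch of that (illustrative only, libmachine's real check runs the command shown in the log):

	package sshwait

	import (
		"net"
		"time"
	)

	// waitForSSHPort dials addr ("host:22") until a TCP connection succeeds
	// or the timeout expires. Unlike running "exit 0" over ssh, this does not
	// verify that key-based authentication actually works.
	func waitForSSHPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			time.Sleep(2 * time.Second)
		}
	}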
	I0930 11:31:59.689854   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetConfigRaw
	I0930 11:31:59.690426   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:31:59.692709   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693063   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.693096   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.693354   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:31:59.693555   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:31:59.693570   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:31:59.693768   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.695742   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696024   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.696050   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.696142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.696286   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696441   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.696600   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.696763   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.696989   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.697005   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:31:59.810353   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:31:59.810380   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810618   34720 buildroot.go:166] provisioning hostname "ha-033260-m03"
	I0930 11:31:59.810647   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:31:59.810829   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.813335   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813637   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.813661   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.813848   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.814001   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.814334   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.814507   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.814661   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.814672   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m03 && echo "ha-033260-m03" | sudo tee /etc/hostname
	I0930 11:31:59.949653   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m03
	
	I0930 11:31:59.949686   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:31:59.952597   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.952969   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:31:59.952992   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:31:59.953242   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:31:59.953469   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953637   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:31:59.953759   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:31:59.953884   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:31:59.954062   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:31:59.954084   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:00.079890   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:32:00.079918   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:00.079939   34720 buildroot.go:174] setting up certificates
	I0930 11:32:00.079950   34720 provision.go:84] configureAuth start
	I0930 11:32:00.079961   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetMachineName
	I0930 11:32:00.080205   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:00.082895   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083281   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.083307   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.083437   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.085443   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085756   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.085776   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.085897   34720 provision.go:143] copyHostCerts
	I0930 11:32:00.085925   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.085978   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:00.085987   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:00.086050   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:00.086121   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086137   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:00.086142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:00.086164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:00.086219   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086243   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:00.086252   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:00.086288   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:00.086360   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m03 san=[127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube]
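The server certificate above is issued from minikube's own CA (ca.pem/ca-key.pem) with the SANs listed in the log. A compact sketch of building a certificate with those same SANs in Go; for brevity it self-signs instead of signing with the CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-033260-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Same SANs as the log: 127.0.0.1 192.168.39.238 ha-033260-m03 localhost minikube
			DNSNames:    []string{"ha-033260-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}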
	I0930 11:32:00.252602   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:00.252654   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:00.252676   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.255361   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255706   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.255731   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.255860   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.255996   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.256131   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.256249   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.345059   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:00.345126   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:00.370752   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:00.370827   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:32:00.397640   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:00.397703   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:00.424094   34720 provision.go:87] duration metric: took 344.128805ms to configureAuth
	I0930 11:32:00.424128   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:00.424360   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:00.424480   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.427139   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427536   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.427563   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.427770   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.427949   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428043   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.428125   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.428217   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.428408   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.428424   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:00.687881   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:00.687919   34720 machine.go:96] duration metric: took 994.35116ms to provisionDockerMachine
	I0930 11:32:00.687935   34720 start.go:293] postStartSetup for "ha-033260-m03" (driver="kvm2")
	I0930 11:32:00.687950   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:00.687976   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.688322   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:00.688349   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.691216   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691735   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.691763   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.691959   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.692185   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.692344   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.692469   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.781946   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:00.786396   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:00.786417   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:00.786494   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:00.786562   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:00.786571   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:00.786646   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:00.796771   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:00.822239   34720 start.go:296] duration metric: took 134.285857ms for postStartSetup
	I0930 11:32:00.822297   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:00.822594   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:00.822622   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.825375   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825743   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.825764   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.825954   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.826142   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.826331   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.826492   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:00.912681   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:00.912751   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:00.970261   34720 fix.go:56] duration metric: took 20.551120789s for fixHost
	I0930 11:32:00.970311   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:00.973284   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973694   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:00.973722   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:00.973873   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:00.974035   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974161   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:00.974267   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:00.974426   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:00.974622   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0930 11:32:00.974633   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:01.099052   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695921.066520843
	
	I0930 11:32:01.099078   34720 fix.go:216] guest clock: 1727695921.066520843
	I0930 11:32:01.099089   34720 fix.go:229] Guest: 2024-09-30 11:32:01.066520843 +0000 UTC Remote: 2024-09-30 11:32:00.970290394 +0000 UTC m=+397.697421093 (delta=96.230449ms)
	I0930 11:32:01.099110   34720 fix.go:200] guest clock delta is within tolerance: 96.230449ms
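The fix.go lines above read the guest clock over SSH with "date +%s.%N" and accept the drift when it stays within a small tolerance. A minimal sketch of that comparison, assuming a plain ssh binary and an illustrative 2-second threshold (the helper name and tolerance are assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta reads the guest's clock over SSH ("date +%s.%N") and
// returns guest time minus local (host) time. A stand-in for minikube's
// ssh_runner; here we simply shell out to ssh.
func guestClockDelta(sshTarget string) (time.Duration, error) {
	out, err := exec.Command("ssh", sshTarget, "date +%s.%N").Output()
	if err != nil {
		return 0, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(time.Now()), nil
}

func main() {
	delta, err := guestClockDelta("docker@192.168.39.238")
	if err != nil {
		panic(err)
	}
	// Assumed tolerance for illustration only; the real threshold lives in fix.go.
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}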
	I0930 11:32:01.099117   34720 start.go:83] releasing machines lock for "ha-033260-m03", held for 20.679993634s
	I0930 11:32:01.099137   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.099384   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:01.102141   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.102593   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.102620   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.104827   34720 out.go:177] * Found network options:
	I0930 11:32:01.106181   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3
	W0930 11:32:01.107308   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.107329   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.107343   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.107885   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108079   34720 main.go:141] libmachine: (ha-033260-m03) Calling .DriverName
	I0930 11:32:01.108167   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:01.108222   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	W0930 11:32:01.108292   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:01.108316   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:01.108408   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:01.108430   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHHostname
	I0930 11:32:01.111240   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111542   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111663   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111698   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.111858   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.111861   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:01.111893   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:01.112028   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHPort
	I0930 11:32:01.112064   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112182   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112189   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHKeyPath
	I0930 11:32:01.112347   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.112360   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetSSHUsername
	I0930 11:32:01.112529   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m03/id_rsa Username:docker}
	I0930 11:32:01.339136   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:01.345573   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:01.345659   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:01.362608   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:01.362630   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:01.362686   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:01.381024   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:01.396259   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:01.396333   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:01.412406   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:01.429258   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:01.562463   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:01.730591   34720 docker.go:233] disabling docker service ...
	I0930 11:32:01.730664   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:01.755797   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:01.769489   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:01.890988   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:02.019465   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:02.036168   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:02.059913   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:02.059981   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.072160   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:02.072247   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.084599   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.096290   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.108573   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:02.120977   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.132246   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:02.150591   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
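The crio.go steps above point crictl at CRI-O's socket and then sed-edit /etc/crio/crio.conf.d/02-crio.conf to set the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A hedged Go sketch applying a subset of the same edits locally (file paths and values copied from the log; in minikube these commands run on the guest via ssh_runner and require root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// sed applies an in-place sed expression to the CRI-O drop-in config.
func sed(expr string) error {
	return exec.Command("sudo", "sed", "-i", expr, crioConf).Run()
}

func main() {
	// Point crictl at the CRI-O socket, as in the logged tee to /etc/crictl.yaml.
	if err := os.WriteFile("/etc/crictl.yaml",
		[]byte("runtime-endpoint: unix:///var/run/crio/crio.sock\n"), 0644); err != nil {
		panic(err)
	}
	// Same substitutions the log performs with sed.
	for _, e := range []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	} {
		if err := sed(e); err != nil {
			panic(err)
		}
	}
	fmt.Println("cri-o configured; restart with: sudo systemctl restart crio")
}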
	I0930 11:32:02.162524   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:02.173575   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:02.173660   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:02.188268   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
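The sequence just above probes net.bridge.bridge-nf-call-iptables, tolerates the status-255 failure, loads br_netfilter as a fallback, and then enables IPv4 forwarding. A minimal sketch of that fallback logic, assuming local execution instead of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command, mirroring the "ssh_runner.go:195] Run: ..."
// pattern in the log (but locally, for brevity).
func run(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	// The sysctl can fail if br_netfilter is not loaded yet; the log treats
	// that as non-fatal and falls back to modprobe.
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo modprobe br_netfilter"); err != nil {
			panic(err)
		}
	}
	// Ensure IPv4 forwarding is on, as in the logged step.
	if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
		panic(err)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}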
	I0930 11:32:02.199979   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:02.326960   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:02.439885   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:02.439960   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:02.446734   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:02.446849   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:02.451344   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:02.492029   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:02.492116   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.521734   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:02.556068   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:02.557555   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:02.558901   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:02.560920   34720 main.go:141] libmachine: (ha-033260-m03) Calling .GetIP
	I0930 11:32:02.563759   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564191   34720 main.go:141] libmachine: (ha-033260-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:70:c8", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:31:51 +0000 UTC Type:0 Mac:52:54:00:f2:70:c8 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-033260-m03 Clientid:01:52:54:00:f2:70:c8}
	I0930 11:32:02.564218   34720 main.go:141] libmachine: (ha-033260-m03) DBG | domain ha-033260-m03 has defined IP address 192.168.39.238 and MAC address 52:54:00:f2:70:c8 in network mk-ha-033260
	I0930 11:32:02.564482   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:02.569571   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
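The one-liner above rewrites /etc/hosts through a temp file: it drops any stale line ending in a tab plus "host.minikube.internal" and appends the fresh entry, so repeated runs stay idempotent. A small sketch of the same pattern wrapped in Go (assumes local bash rather than the ssh_runner; the entry matches the one the log adds):

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostsEntry rebuilds /etc/hosts without any old line for host
// and appends "ip<tab>host", mirroring the bash one-liner in the log.
func ensureHostsEntry(ip, host string) error {
	cmd := fmt.Sprintf(
		"{ grep -v $'\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	// Same entry the log adds on the third control-plane node.
	if err := ensureHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}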
	I0930 11:32:02.585245   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:02.585463   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:02.585746   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.585790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.617422   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0930 11:32:02.617831   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.618295   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.618314   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.618694   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.618907   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:02.621016   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:02.621337   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:02.621378   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:02.636969   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I0930 11:32:02.637538   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:02.638051   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:02.638068   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:02.638431   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:02.638769   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:02.639005   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.238
	I0930 11:32:02.639018   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:02.639031   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:02.639158   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:02.639204   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:02.639213   34720 certs.go:256] generating profile certs ...
	I0930 11:32:02.639277   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key
	I0930 11:32:02.639334   34720 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key.93938a37
	I0930 11:32:02.639369   34720 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key
	I0930 11:32:02.639382   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:02.639398   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:02.639410   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:02.639423   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:02.639436   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:32:02.639451   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:32:02.639464   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:32:02.639477   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:32:02.639526   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:02.639556   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:02.639565   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:02.639587   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:02.639609   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:02.639654   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:02.639691   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:02.639715   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:02.639728   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:02.639740   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:02.639764   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHHostname
	I0930 11:32:02.643357   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.643807   34720 main.go:141] libmachine: (ha-033260) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:2e:07", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:30:24 +0000 UTC Type:0 Mac:52:54:00:b0:2e:07 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-033260 Clientid:01:52:54:00:b0:2e:07}
	I0930 11:32:02.643839   34720 main.go:141] libmachine: (ha-033260) DBG | domain ha-033260 has defined IP address 192.168.39.249 and MAC address 52:54:00:b0:2e:07 in network mk-ha-033260
	I0930 11:32:02.644023   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHPort
	I0930 11:32:02.644227   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHKeyPath
	I0930 11:32:02.644414   34720 main.go:141] libmachine: (ha-033260) Calling .GetSSHUsername
	I0930 11:32:02.644553   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260/id_rsa Username:docker}
	I0930 11:32:02.726043   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0930 11:32:02.732664   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0930 11:32:02.744611   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0930 11:32:02.750045   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0930 11:32:02.763417   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0930 11:32:02.768220   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0930 11:32:02.780605   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0930 11:32:02.786158   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0930 11:32:02.802503   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0930 11:32:02.809377   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0930 11:32:02.821900   34720 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0930 11:32:02.827740   34720 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0930 11:32:02.842110   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:02.872987   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:02.903102   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:02.932917   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:02.966742   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0930 11:32:02.995977   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:32:03.025802   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:32:03.057227   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:32:03.085425   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:03.115042   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:03.142328   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:03.168248   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0930 11:32:03.189265   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0930 11:32:03.208719   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0930 11:32:03.227953   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0930 11:32:03.248805   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0930 11:32:03.268786   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0930 11:32:03.288511   34720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0930 11:32:03.309413   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:03.315862   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:03.328610   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333839   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.333909   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:03.340595   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:03.353343   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:03.364689   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369580   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.369669   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:03.376067   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:03.388290   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:03.400003   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405168   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.405235   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:03.411812   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:03.424569   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:03.429588   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:32:03.436748   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:32:03.443675   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:32:03.450618   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:32:03.457889   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:32:03.464815   34720 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
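Each "openssl x509 -noout ... -checkend 86400" call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; the exit status is what decides whether the existing certs can be reused. A short sketch of that check, using the same openssl invocation (the wrapper function is illustrative, not minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// certValidForADay reports whether the certificate at path will not expire
// within the next 86400 seconds, using the same openssl flags as the log.
func certValidForADay(path string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	return err == nil // openssl exits non-zero if the cert expires within the window
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Printf("%s valid for 24h: %v\n", p, certValidForADay(p))
	}
}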
	I0930 11:32:03.471778   34720 kubeadm.go:934] updating node {m03 192.168.39.238 8443 v1.31.1 crio true true} ...
	I0930 11:32:03.471887   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:03.471924   34720 kube-vip.go:115] generating kube-vip config ...
	I0930 11:32:03.471974   34720 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0930 11:32:03.490629   34720 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0930 11:32:03.490701   34720 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0930 11:32:03.490761   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:03.502695   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:03.502771   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0930 11:32:03.514300   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:03.532840   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:03.552583   34720 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0930 11:32:03.570717   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:03.574725   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:03.588635   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.736031   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.755347   34720 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:32:03.755606   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:03.757343   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:03.758664   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:03.930799   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:03.947764   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:03.948004   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:03.948058   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:03.948281   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.948378   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:03.948390   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.948398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.948408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.951644   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.952631   34720 node_ready.go:49] node "ha-033260-m03" has status "Ready":"True"
	I0930 11:32:03.952655   34720 node_ready.go:38] duration metric: took 4.354654ms for node "ha-033260-m03" to be "Ready" ...
	I0930 11:32:03.952666   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
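From here on, node_ready.go and pod_ready.go poll the apiserver (the GET /api/v1/nodes/... and /api/v1/namespaces/kube-system/pods/... requests below) roughly every half second until the Ready condition reports True or the 6-minute timeout expires. A minimal client-go sketch of that loop, assuming the logged kubeconfig path and an illustrative 500 ms poll interval:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed interval; minikube's is similar
	}
	return fmt.Errorf("node %q not Ready after %v", name, timeout)
}

func main() {
	// Kubeconfig path taken from the logged KUBECONFIG; purely illustrative here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19734-3842/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "ha-033260-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-033260-m03" has status "Ready":"True"`)
}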
	I0930 11:32:03.952741   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:03.952751   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.952758   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.952763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.959043   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:03.966223   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:03.966318   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:03.966326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.966334   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.966341   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.969582   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:03.970409   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:03.970425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:03.970433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:03.970436   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:03.973995   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.466604   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.466626   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.466634   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.466638   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470209   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.470966   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.470982   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.470989   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.470994   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.473518   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:04.966613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:04.966634   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.966642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.966647   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.970295   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:04.971225   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:04.971247   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:04.971256   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:04.971267   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:04.974506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.466575   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.466597   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.466605   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.466609   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.471476   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.472347   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.472369   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.472379   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.472385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.476605   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.966462   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:05.966484   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.966495   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.966499   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.970347   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:05.971438   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:05.971455   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:05.971465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:05.971469   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:05.975635   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:05.976454   34720 pod_ready.go:103] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:06.466781   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.466807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.466818   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.466825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.470300   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.471083   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.471100   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.471108   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.471111   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.474455   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:06.966864   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:06.966887   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.966895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.966899   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.970946   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:06.971993   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:06.972007   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:06.972014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:06.972021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:06.975563   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.466626   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.466651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.466664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.466671   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.471030   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:07.471751   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.471767   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.471775   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.471780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.475078   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.966446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:07.966464   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.966472   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.966476   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.970130   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:07.970892   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:07.970907   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:07.970916   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:07.970921   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:07.974558   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.467355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:08.467382   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.467392   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.467398   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.491602   34720 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0930 11:32:08.492458   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.492478   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.492488   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.492494   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.504709   34720 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0930 11:32:08.505926   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.505961   34720 pod_ready.go:82] duration metric: took 4.539705143s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.505976   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.506053   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:08.506070   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.506079   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.506091   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.513015   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:08.514472   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.514492   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.514500   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.514504   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.522097   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:08.522597   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.522620   34720 pod_ready.go:82] duration metric: took 16.634648ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522632   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.522710   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:08.522720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.522730   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.522736   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.528114   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:08.529205   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:08.529222   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.529239   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.529245   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.532511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.533059   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.533085   34720 pod_ready.go:82] duration metric: took 10.444686ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533097   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.533168   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:08.533175   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.533185   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.533194   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.536360   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.537030   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:08.537046   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.537054   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.537058   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.540241   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.540684   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:08.540702   34720 pod_ready.go:82] duration metric: took 7.598243ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540712   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:08.540774   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:08.540782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.540789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.540794   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.544599   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:08.545135   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:08.545150   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:08.545158   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:08.545161   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:08.548627   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.041691   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.041715   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.041724   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.041728   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.045686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.046390   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.046409   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.046420   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.046428   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.050351   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.541239   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:09.541273   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.541285   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.541291   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.544605   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:09.545287   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:09.545303   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:09.545311   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:09.545314   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:09.548353   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.041331   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.041356   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.041368   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.041373   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.045200   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.046010   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.046031   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.046039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.046046   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.049179   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.541488   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:10.541515   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.541528   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.541536   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.545641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:10.546377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:10.546400   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:10.546407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:10.546410   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:10.549732   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:10.550616   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:11.040952   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.040974   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.040982   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.040989   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.046528   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:11.047555   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.047571   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.047581   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.047586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.051499   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:11.541109   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:11.541139   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.541149   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.541154   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.545483   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:11.546103   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:11.546119   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:11.546130   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:11.546136   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:11.549272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:12.041130   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.041165   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.041176   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.041182   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.045465   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.046261   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.046277   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.046284   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.046289   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.054233   34720 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0930 11:32:12.540971   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:12.540992   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.541000   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.541004   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.545075   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:12.545773   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:12.545789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:12.545799   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:12.545805   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:12.549003   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.041785   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.041807   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.041817   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.041823   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.045506   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.046197   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.046214   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.046221   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.046241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.048544   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:13.048911   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:13.541700   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:13.541728   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.541740   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.541748   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.545726   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:13.546727   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:13.546742   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:13.546749   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:13.546753   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:13.549687   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:14.041571   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.041593   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.041601   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.041605   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.045629   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.047164   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.047185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.047199   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.047203   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.052005   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:14.541017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:14.541043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.541055   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.541060   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.545027   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:14.546245   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:14.546266   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:14.546275   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:14.546280   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:14.549572   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.041446   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.041468   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.041477   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.041481   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.045111   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.045983   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.046004   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.046014   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.046021   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.055916   34720 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0930 11:32:15.056489   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:15.541417   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:15.541448   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.541460   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.541465   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.544952   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:15.545764   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:15.545781   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:15.545790   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:15.545795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:15.552050   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:16.040979   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.041003   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.041011   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.041016   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.045765   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:16.046411   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.046427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.046435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.046439   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.056745   34720 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0930 11:32:16.541660   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:16.541682   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.541692   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.541696   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.545213   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:16.546092   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:16.546110   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:16.546121   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:16.546126   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:16.548900   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.041375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.041399   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.041411   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.041417   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.045641   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:17.046588   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.046611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.046621   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.046628   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.049632   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.541651   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:17.541676   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.541686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.541692   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.545407   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:17.546246   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:17.546269   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:17.546282   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:17.546290   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:17.549117   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:17.549778   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:18.041518   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.041556   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.041568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.041576   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:18.046748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.046769   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.046780   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.046787   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.052283   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:18.541399   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:18.541425   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.541433   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.541437   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.545011   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:18.546056   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:18.546078   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:18.546089   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:18.546097   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:18.549203   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.041166   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.041201   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.041210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.041214   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.045755   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.046481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.046500   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.046510   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.046517   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.049924   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:19.541836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:19.541873   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.541885   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.541893   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.546183   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.547097   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:19.547116   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:19.547126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:19.547130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:19.551235   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:19.551688   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:20.041000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.041027   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.041039   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.041053   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.045149   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.045912   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.045934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.045945   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.045950   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.049525   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:20.541792   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:20.541813   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.541821   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.541825   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.546083   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:20.546947   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:20.546969   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:20.546980   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:20.546988   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:20.551303   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:21.041910   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.041938   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.041950   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.041955   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.047824   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:21.048523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.048544   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.048555   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.048560   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.051690   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.541671   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:21.541695   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.541707   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.541714   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.545187   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:21.545925   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:21.545943   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:21.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:21.545957   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:21.549146   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.040908   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.040934   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.040944   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.040949   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.044322   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.045253   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.045275   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.045286   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.045311   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.048540   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:22.049217   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:22.541377   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:22.541397   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.541405   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.596027   34720 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0930 11:32:22.596840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:22.596858   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:22.596868   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:22.596876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:22.600101   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.041796   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.041817   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.041826   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.041830   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.046144   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:23.047374   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.047396   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.047407   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.047412   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.051210   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.541365   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:23.541391   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.541403   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.541408   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.544624   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:23.545332   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:23.545348   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:23.545356   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:23.545362   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:23.548076   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.040942   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.040985   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.040995   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.040999   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.044909   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.045625   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.045642   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.045653   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.045658   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.048446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:24.541477   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:24.541497   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.541506   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.541509   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.545585   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:24.546447   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:24.546460   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:24.546468   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:24.546472   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:24.549497   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:24.550184   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:25.041599   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.041635   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.041645   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.041651   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.048106   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:25.048975   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.048998   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.049008   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.049013   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.054165   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:25.541178   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:25.541223   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.541235   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.541241   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.545143   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:25.545923   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:25.545941   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:25.545953   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:25.545962   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:25.549975   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.041161   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.041185   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.041193   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.041199   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.045231   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:26.046025   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.046042   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.046049   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.046055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.048864   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:26.541487   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:26.541511   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.541521   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.541528   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.548114   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:26.548980   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:26.548993   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:26.549001   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:26.549005   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:26.552757   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:26.553360   34720 pod_ready.go:103] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:27.041590   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.041611   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.041636   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.041639   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.046112   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:27.047076   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.047092   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.047100   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.047104   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.052347   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:27.541767   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:27.541789   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.541797   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.541801   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.545090   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:27.545664   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:27.545678   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:27.545686   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:27.545690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:27.548839   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.041179   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.041200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.041212   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.041217   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.046396   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:28.047355   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.047372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.047384   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.047388   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.053891   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:28.541237   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:28.541259   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.541268   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.541271   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545192   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.545941   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.545959   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.545967   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.545970   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.549204   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.550435   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.550457   34720 pod_ready.go:82] duration metric: took 20.009736872s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.550559   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:32:28.550570   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.550580   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.550590   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.553686   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.554394   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:28.554407   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.554414   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.554420   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.556924   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.557578   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.557600   34720 pod_ready.go:82] duration metric: took 7.108562ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557612   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.557692   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:32:28.557702   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.557712   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.557722   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.560446   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.561014   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:28.561029   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.561036   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.561040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.563867   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:28.564450   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:28.564468   34720 pod_ready.go:82] duration metric: took 6.836659ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564483   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:28.564558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:28.564568   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.564578   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.564586   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.567937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:28.568639   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:28.568653   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:28.568661   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:28.568664   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:28.571277   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:29.065431   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.065458   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.065466   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.065469   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.069406   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.070004   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.070020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.070028   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.070033   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.073076   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.565018   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:29.565043   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.565052   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.565055   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.568350   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:29.569071   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:29.569090   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:29.569101   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:29.569107   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:29.572794   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.065688   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.065710   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.065717   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.065721   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.069593   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.070370   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.070385   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.070393   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.070397   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.073099   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.565351   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:30.565372   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.565380   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.565385   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.568480   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:30.569460   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:30.569481   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:30.569489   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:30.569493   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:30.572043   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:30.572542   34720 pod_ready.go:103] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"False"
	I0930 11:32:31.064934   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:32:31.064954   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.064963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.064967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.069154   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:31.070615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.070631   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.070642   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.070648   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.073638   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.074233   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.074258   34720 pod_ready.go:82] duration metric: took 2.50976614s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074273   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.074364   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:32:31.074392   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.074418   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.074427   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.077429   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.078309   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:31.078326   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.078336   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.078343   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.080937   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.081321   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.081341   34720 pod_ready.go:82] duration metric: took 7.059128ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081353   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.081418   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:32:31.081428   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.081438   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.081447   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.084351   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.084930   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:31.084944   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.084951   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.084956   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.087905   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:31.088473   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:31.088493   34720 pod_ready.go:82] duration metric: took 7.129947ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.088504   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:31.141826   34720 request.go:632] Waited for 53.255293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141907   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.141915   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.141924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.141929   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.145412   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.341415   34720 request.go:632] Waited for 195.313156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341481   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.341506   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.341520   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.341524   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.344937   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.589605   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:31.589637   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.589646   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.589651   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.593330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:31.741775   34720 request.go:632] Waited for 147.33103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741840   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:31.741847   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:31.741857   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:31.741869   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:31.745796   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.089735   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.089761   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.089772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.089776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.093492   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.141705   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.141744   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.141752   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.141757   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.145662   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.589384   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:32.589408   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.589418   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.589426   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.592976   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:32.593954   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:32.593971   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:32.593979   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:32.593983   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:32.597157   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.089690   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:32:33.089720   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.089733   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.089743   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.094817   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:33.095412   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:33.095427   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.095435   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.095442   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.098967   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.099551   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.099569   34720 pod_ready.go:82] duration metric: took 2.011056626s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.099580   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.141920   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:32:33.141953   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.141961   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.141965   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.146176   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:33.342278   34720 request.go:632] Waited for 195.329061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342343   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:33.342351   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.342362   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.342368   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.346051   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.346626   34720 pod_ready.go:98] node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
	I0930 11:32:33.346650   34720 pod_ready.go:82] duration metric: took 247.062207ms for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	E0930 11:32:33.346662   34720 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-033260-m04" hosting pod "kube-proxy-cr58q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-033260-m04" has status "Ready":"Unknown"
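	The repeated "Waited for ~195ms due to client-side throttling" lines above come from client-go's client-side rate limiter (by default roughly 5 requests/second with a small burst), not from API Priority and Fairness on the server. A minimal Go sketch of how a client could raise those limits when building its clientset; the kubeconfig path and the QPS/Burst values here are illustrative, not what minikube configures:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a rest.Config from a kubeconfig (path is illustrative).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// client-go throttles requests on the client; raising QPS/Burst shortens the
		// "Waited for ... due to client-side throttling" delays seen in this log.
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("clientset ready:", cs != nil)
	}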
	I0930 11:32:33.346673   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.541732   34720 request.go:632] Waited for 194.984853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541823   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:32:33.541832   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.541839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.541846   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.545738   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.741681   34720 request.go:632] Waited for 195.307104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741746   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:33.741753   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.741839   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.741853   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.745711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:33.746422   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:33.746442   34720 pod_ready.go:82] duration metric: took 399.762428ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.746454   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:33.941491   34720 request.go:632] Waited for 194.974915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941558   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:32:33.941575   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:33.941582   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:33.941585   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:33.945250   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.142081   34720 request.go:632] Waited for 196.05781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142187   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:34.142199   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.142207   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.142211   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.146079   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.146737   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.146756   34720 pod_ready.go:82] duration metric: took 400.295141ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.146770   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.342040   34720 request.go:632] Waited for 195.196365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342146   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:32:34.342159   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.342171   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.342181   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.345711   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.541794   34720 request.go:632] Waited for 195.310617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541870   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.541876   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.541884   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.541889   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.545585   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.546141   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.546158   34720 pod_ready.go:82] duration metric: took 399.379827ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.546174   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.742192   34720 request.go:632] Waited for 195.896441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742266   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:32:34.742272   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.742279   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.742283   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.745382   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.941671   34720 request.go:632] Waited for 195.443927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941750   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:34.941755   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:34.941763   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:34.941767   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:34.945425   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:34.946182   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:34.946207   34720 pod_ready.go:82] duration metric: took 400.022007ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:34.946220   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.142264   34720 request.go:632] Waited for 195.977294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142349   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:32:35.142355   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.142363   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.142372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.146093   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.342119   34720 request.go:632] Waited for 195.354718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342174   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:35.342179   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.342185   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.342189   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.345678   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.346226   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.346244   34720 pod_ready.go:82] duration metric: took 400.013115ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.346253   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.541907   34720 request.go:632] Waited for 195.545182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541986   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:32:35.541995   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.542006   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.542018   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.545604   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.741571   34720 request.go:632] Waited for 195.370489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741659   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:35.741667   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.741678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.741690   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.745574   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:35.746159   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:35.746179   34720 pod_ready.go:82] duration metric: took 399.919057ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:35.746193   34720 pod_ready.go:39] duration metric: took 31.793515417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
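	The 31.79s figure above is the total time spent polling each system pod until its Ready condition reported True. A minimal client-go sketch of that kind of readiness poll; this is illustrative only and not minikube's pod_ready.go, which also checks the hosting node and logs each round trip:

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // a more tolerant poller would swallow transient errors
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}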
	I0930 11:32:35.746211   34720 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:32:35.746295   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:32:35.770439   34720 api_server.go:72] duration metric: took 32.015036347s to wait for apiserver process to appear ...
	I0930 11:32:35.770467   34720 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:32:35.770491   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0930 11:32:35.775724   34720 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0930 11:32:35.775811   34720 round_trippers.go:463] GET https://192.168.39.249:8443/version
	I0930 11:32:35.775820   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.775829   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.775838   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.776730   34720 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0930 11:32:35.776791   34720 api_server.go:141] control plane version: v1.31.1
	I0930 11:32:35.776806   34720 api_server.go:131] duration metric: took 6.332786ms to wait for apiserver health ...
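	The healthz probe above is a plain GET against the apiserver's /healthz endpoint; a 200 response with the literal body "ok" is what the log records. A small sketch of the same check through an existing clientset's REST client (illustrative; minikube's api_server.go builds its own HTTPS client for this):

	package health

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	// apiserverHealthy issues GET /healthz and returns an error unless the
	// apiserver answers; on success the body is the literal string "ok".
	func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) error {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		fmt.Printf("healthz: %s\n", body)
		return nil
	}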
	I0930 11:32:35.776814   34720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:32:35.942219   34720 request.go:632] Waited for 165.338166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942284   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:35.942290   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:35.942302   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:35.942308   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:35.948613   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:35.956880   34720 system_pods.go:59] 26 kube-system pods found
	I0930 11:32:35.956918   34720 system_pods.go:61] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:35.956927   34720 system_pods.go:61] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:35.956932   34720 system_pods.go:61] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:35.956938   34720 system_pods.go:61] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:35.956942   34720 system_pods.go:61] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:35.956947   34720 system_pods.go:61] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:35.956951   34720 system_pods.go:61] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:35.956956   34720 system_pods.go:61] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:35.956960   34720 system_pods.go:61] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:35.956965   34720 system_pods.go:61] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:35.956971   34720 system_pods.go:61] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:35.956977   34720 system_pods.go:61] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:35.956988   34720 system_pods.go:61] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:35.956996   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:35.957001   34720 system_pods.go:61] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:35.957009   34720 system_pods.go:61] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:35.957014   34720 system_pods.go:61] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:35.957019   34720 system_pods.go:61] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:35.957027   34720 system_pods.go:61] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:35.957033   34720 system_pods.go:61] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:35.957041   34720 system_pods.go:61] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:35.957046   34720 system_pods.go:61] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:35.957053   34720 system_pods.go:61] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:35.957058   34720 system_pods.go:61] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:35.957066   34720 system_pods.go:61] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:35.957070   34720 system_pods.go:61] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:35.957081   34720 system_pods.go:74] duration metric: took 180.260558ms to wait for pod list to return data ...
	I0930 11:32:35.957093   34720 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:32:36.141557   34720 request.go:632] Waited for 184.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141646   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/default/serviceaccounts
	I0930 11:32:36.141655   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.141664   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.141669   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.146009   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.146146   34720 default_sa.go:45] found service account: "default"
	I0930 11:32:36.146163   34720 default_sa.go:55] duration metric: took 189.061389ms for default service account to be created ...
	I0930 11:32:36.146176   34720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:32:36.341683   34720 request.go:632] Waited for 195.43917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341772   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:36.341782   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.341789   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.341795   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.348026   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:36.355936   34720 system_pods.go:86] 26 kube-system pods found
	I0930 11:32:36.355974   34720 system_pods.go:89] "coredns-7c65d6cfc9-5frmm" [7333717d-95d5-4990-bac9-8443a51eee97] Running
	I0930 11:32:36.355980   34720 system_pods.go:89] "coredns-7c65d6cfc9-kt87v" [26f75c31-d44d-4a4c-8048-b6ce5c824151] Running
	I0930 11:32:36.355985   34720 system_pods.go:89] "etcd-ha-033260" [d56638a1-acd7-4dec-ad8f-12df96339db5] Running
	I0930 11:32:36.355989   34720 system_pods.go:89] "etcd-ha-033260-m02" [8f01e472-f8ae-4ef6-8f1b-a318d77140e2] Running
	I0930 11:32:36.355993   34720 system_pods.go:89] "etcd-ha-033260-m03" [69f89dd9-3421-46a4-a577-4d9fc1dc2f40] Running
	I0930 11:32:36.355997   34720 system_pods.go:89] "kindnet-4rpgw" [e9fd1809-f010-4725-ad29-7c7b4978a70f] Running
	I0930 11:32:36.356000   34720 system_pods.go:89] "kindnet-752cm" [af6a4971-6c03-4800-8c93-72937bc9d2bd] Running
	I0930 11:32:36.356003   34720 system_pods.go:89] "kindnet-g94k6" [260e385d-9e17-4af8-a854-8683afb714c4] Running
	I0930 11:32:36.356007   34720 system_pods.go:89] "kindnet-kb2cp" [c071322f-794b-4d6f-a33a-92077352ef5d] Running
	I0930 11:32:36.356011   34720 system_pods.go:89] "kube-apiserver-ha-033260" [bf23a120-d2fd-4446-9965-935832ad0587] Running
	I0930 11:32:36.356015   34720 system_pods.go:89] "kube-apiserver-ha-033260-m02" [cdc17419-96df-4112-8926-42e589cb7da5] Running
	I0930 11:32:36.356019   34720 system_pods.go:89] "kube-apiserver-ha-033260-m03" [24f2e5a6-8ccd-41a4-9881-b3326db38a78] Running
	I0930 11:32:36.356022   34720 system_pods.go:89] "kube-controller-manager-ha-033260" [0f751e4d-0adf-4425-81e9-723edcff472c] Running
	I0930 11:32:36.356025   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m02" [0c485755-8aae-4287-b7f0-f4f51c5ffc29] Running
	I0930 11:32:36.356028   34720 system_pods.go:89] "kube-controller-manager-ha-033260-m03" [aff121d2-41f1-4a58-8d41-09ee2759e183] Running
	I0930 11:32:36.356031   34720 system_pods.go:89] "kube-proxy-cr58q" [b2de7434-03f1-4bbc-ab62-3101483908c1] Running
	I0930 11:32:36.356034   34720 system_pods.go:89] "kube-proxy-fckwn" [784e8104-d913-476f-b1c3-9fb91d7e4c88] Running
	I0930 11:32:36.356037   34720 system_pods.go:89] "kube-proxy-fctld" [6ebb84e4-ea77-42d4-8237-30564d82cc03] Running
	I0930 11:32:36.356041   34720 system_pods.go:89] "kube-proxy-mxvxr" [314da0b5-6242-4af0-8e99-d0aaba82a43e] Running
	I0930 11:32:36.356044   34720 system_pods.go:89] "kube-scheduler-ha-033260" [37cc3312-1d25-4ee0-b6d2-1e0dbfdd4e16] Running
	I0930 11:32:36.356050   34720 system_pods.go:89] "kube-scheduler-ha-033260-m02" [0518eff4-61a1-4d11-9544-179c5e77b655] Running
	I0930 11:32:36.356053   34720 system_pods.go:89] "kube-scheduler-ha-033260-m03" [30086d5f-5b9b-4e93-bc1e-dee878d5ec71] Running
	I0930 11:32:36.356059   34720 system_pods.go:89] "kube-vip-ha-033260" [143642d3-d8cb-4ce3-a1c7-8aa0d624208d] Running
	I0930 11:32:36.356062   34720 system_pods.go:89] "kube-vip-ha-033260-m02" [6183d51d-d0b5-456a-9e46-abc6f30dd012] Running
	I0930 11:32:36.356065   34720 system_pods.go:89] "kube-vip-ha-033260-m03" [043e0bff-9099-41a1-98e5-d0f6c47a853d] Running
	I0930 11:32:36.356068   34720 system_pods.go:89] "storage-provisioner" [964381ab-f2ac-4361-a7e0-5212fff5e26e] Running
	I0930 11:32:36.356075   34720 system_pods.go:126] duration metric: took 209.893533ms to wait for k8s-apps to be running ...
	I0930 11:32:36.356084   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:32:36.356128   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:32:36.376905   34720 system_svc.go:56] duration metric: took 20.807413ms WaitForService to wait for kubelet
	I0930 11:32:36.376934   34720 kubeadm.go:582] duration metric: took 32.621540674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:32:36.376952   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:32:36.541278   34720 request.go:632] Waited for 164.265532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541328   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:32:36.541345   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:36.541372   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:36.541378   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:36.545532   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:36.546930   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546950   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546960   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546964   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546970   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546975   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546980   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:32:36.546984   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:32:36.546989   34720 node_conditions.go:105] duration metric: took 170.032136ms to run NodePressure ...
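	The four capacity pairs above are read from each node's status (four nodes, each reporting 17734596Ki of ephemeral storage and 2 CPUs). A short sketch of listing nodes and printing the same two fields; illustrative only, since minikube's node_conditions.go wraps this in its NodePressure verification:

	package nodeinfo

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists all nodes and prints the two capacity fields logged above.
	func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}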
	I0930 11:32:36.547003   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:32:36.547027   34720 start.go:255] writing updated cluster config ...
	I0930 11:32:36.548771   34720 out.go:201] 
	I0930 11:32:36.549990   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:36.550071   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.551533   34720 out.go:177] * Starting "ha-033260-m04" worker node in "ha-033260" cluster
	I0930 11:32:36.552654   34720 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:32:36.552671   34720 cache.go:56] Caching tarball of preloaded images
	I0930 11:32:36.552768   34720 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:32:36.552782   34720 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:32:36.552887   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:36.553084   34720 start.go:360] acquireMachinesLock for ha-033260-m04: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:32:36.553130   34720 start.go:364] duration metric: took 26.329µs to acquireMachinesLock for "ha-033260-m04"
	I0930 11:32:36.553148   34720 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:32:36.553160   34720 fix.go:54] fixHost starting: m04
	I0930 11:32:36.553451   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:36.553481   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:36.569922   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33611
	I0930 11:32:36.570471   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:36.571044   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:36.571066   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:36.571377   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:36.571578   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:36.571759   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetState
	I0930 11:32:36.573541   34720 fix.go:112] recreateIfNeeded on ha-033260-m04: state=Stopped err=<nil>
	I0930 11:32:36.573570   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	W0930 11:32:36.573771   34720 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:32:36.575555   34720 out.go:177] * Restarting existing kvm2 VM for "ha-033260-m04" ...
	I0930 11:32:36.576772   34720 main.go:141] libmachine: (ha-033260-m04) Calling .Start
	I0930 11:32:36.576973   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring networks are active...
	I0930 11:32:36.577708   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network default is active
	I0930 11:32:36.578046   34720 main.go:141] libmachine: (ha-033260-m04) Ensuring network mk-ha-033260 is active
	I0930 11:32:36.578396   34720 main.go:141] libmachine: (ha-033260-m04) Getting domain xml...
	I0930 11:32:36.579052   34720 main.go:141] libmachine: (ha-033260-m04) Creating domain...
	I0930 11:32:37.876264   34720 main.go:141] libmachine: (ha-033260-m04) Waiting to get IP...
	I0930 11:32:37.877213   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:37.877645   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:37.877707   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:37.877598   36596 retry.go:31] will retry after 232.490022ms: waiting for machine to come up
	I0930 11:32:38.112070   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.112572   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.112594   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.112550   36596 retry.go:31] will retry after 256.882229ms: waiting for machine to come up
	I0930 11:32:38.371192   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.371815   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.371840   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.371754   36596 retry.go:31] will retry after 461.059855ms: waiting for machine to come up
	I0930 11:32:38.834060   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:38.834574   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:38.834602   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:38.834535   36596 retry.go:31] will retry after 561.972608ms: waiting for machine to come up
	I0930 11:32:39.398393   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:39.398837   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:39.398861   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:39.398804   36596 retry.go:31] will retry after 603.760478ms: waiting for machine to come up
	I0930 11:32:40.004623   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.004981   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.005003   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.004944   36596 retry.go:31] will retry after 795.659949ms: waiting for machine to come up
	I0930 11:32:40.802044   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:40.802482   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:40.802507   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:40.802432   36596 retry.go:31] will retry after 876.600506ms: waiting for machine to come up
	I0930 11:32:41.680956   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:41.681439   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:41.681475   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:41.681410   36596 retry.go:31] will retry after 1.356578507s: waiting for machine to come up
	I0930 11:32:43.039790   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:43.040245   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:43.040273   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:43.040181   36596 retry.go:31] will retry after 1.138308059s: waiting for machine to come up
	I0930 11:32:44.180454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:44.180880   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:44.180912   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:44.180838   36596 retry.go:31] will retry after 1.724095206s: waiting for machine to come up
	I0930 11:32:45.906969   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:45.907551   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:45.907580   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:45.907505   36596 retry.go:31] will retry after 2.79096153s: waiting for machine to come up
	I0930 11:32:48.699904   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:48.700403   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:48.700433   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:48.700358   36596 retry.go:31] will retry after 2.880773223s: waiting for machine to come up
	I0930 11:32:51.582182   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:51.582528   34720 main.go:141] libmachine: (ha-033260-m04) DBG | unable to find current IP address of domain ha-033260-m04 in network mk-ha-033260
	I0930 11:32:51.582553   34720 main.go:141] libmachine: (ha-033260-m04) DBG | I0930 11:32:51.582515   36596 retry.go:31] will retry after 3.567167233s: waiting for machine to come up
	I0930 11:32:55.151238   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.151679   34720 main.go:141] libmachine: (ha-033260-m04) Found IP for machine: 192.168.39.104
	I0930 11:32:55.151704   34720 main.go:141] libmachine: (ha-033260-m04) Reserving static IP address...
	I0930 11:32:55.151717   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has current primary IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.152141   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.152161   34720 main.go:141] libmachine: (ha-033260-m04) Reserved static IP address: 192.168.39.104
	I0930 11:32:55.152180   34720 main.go:141] libmachine: (ha-033260-m04) DBG | skip adding static IP to network mk-ha-033260 - found existing host DHCP lease matching {name: "ha-033260-m04", mac: "52:54:00:99:41:bc", ip: "192.168.39.104"}
	I0930 11:32:55.152198   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Getting to WaitForSSH function...
	I0930 11:32:55.152212   34720 main.go:141] libmachine: (ha-033260-m04) Waiting for SSH to be available...
	I0930 11:32:55.154601   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.154955   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.154984   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.155062   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH client type: external
	I0930 11:32:55.155094   34720 main.go:141] libmachine: (ha-033260-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa (-rw-------)
	I0930 11:32:55.155127   34720 main.go:141] libmachine: (ha-033260-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:32:55.155140   34720 main.go:141] libmachine: (ha-033260-m04) DBG | About to run SSH command:
	I0930 11:32:55.155169   34720 main.go:141] libmachine: (ha-033260-m04) DBG | exit 0
	I0930 11:32:55.282203   34720 main.go:141] libmachine: (ha-033260-m04) DBG | SSH cmd err, output: <nil>: 
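	The run of "will retry after 232ms ... 3.5s" lines above is minikube polling the libvirt DHCP leases with a growing, jittered delay until the restarted domain reports an IP, then probing SSH with "exit 0" until it succeeds. A generic sketch of that retry-with-backoff shape; this is not minikube's retry.go, and getIP is a placeholder for whatever actually inspects the DHCP leases:

	package vmwait

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP keeps calling getIP with a growing, jittered delay until it returns
	// a non-empty address or the deadline passes, mirroring the retry pattern in the log.
	func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil && ip != "" {
				return ip, nil
			}
			// add jitter and grow the delay, capped so polling stays reasonably frequent
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}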
	I0930 11:32:55.282534   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetConfigRaw
	I0930 11:32:55.283161   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.286073   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286485   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.286510   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.286784   34720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/config.json ...
	I0930 11:32:55.287029   34720 machine.go:93] provisionDockerMachine start ...
	I0930 11:32:55.287049   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:55.287272   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.289455   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.289920   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.289948   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.290156   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.290326   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290453   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.290576   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.290707   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.290900   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.290913   34720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:32:55.398165   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:32:55.398197   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398448   34720 buildroot.go:166] provisioning hostname "ha-033260-m04"
	I0930 11:32:55.398492   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.398697   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.401792   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.402275   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.402458   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.402629   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402793   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.402918   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.403113   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.403282   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.403294   34720 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-033260-m04 && echo "ha-033260-m04" | sudo tee /etc/hostname
	I0930 11:32:55.531966   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-033260-m04
	
	I0930 11:32:55.531997   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.535254   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535632   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.535675   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.535815   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.536008   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536169   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.536305   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.536447   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:55.536613   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:55.536629   34720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-033260-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-033260-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-033260-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:32:55.658892   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
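	Each "About to run SSH command" / "SSH cmd err, output" pair above is one remote command executed over the VM's SSH port, like the hostname and /etc/hosts edits just shown. A bare-bones sketch of running one such command with golang.org/x/crypto/ssh; the host, user, and key path are illustrative, and minikube's ssh_runner adds retries and output streaming on top of this:

	package sshrun

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote executes a single command on host:22 using a private key file and
	// returns the combined stdout/stderr, roughly what each SSH step in the log does.
	func runRemote(host, user, keyPath, cmd string) ([]byte, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return nil, err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer sess.Close()
		return sess.CombinedOutput(cmd)
	}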
	I0930 11:32:55.658919   34720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:32:55.658936   34720 buildroot.go:174] setting up certificates
	I0930 11:32:55.658945   34720 provision.go:84] configureAuth start
	I0930 11:32:55.658953   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetMachineName
	I0930 11:32:55.659243   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:55.662312   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662773   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.662798   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.662957   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.665302   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665663   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.665690   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.665764   34720 provision.go:143] copyHostCerts
	I0930 11:32:55.665796   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665833   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:32:55.665842   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:32:55.665927   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:32:55.666021   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666039   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:32:55.666047   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:32:55.666074   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:32:55.666119   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666136   34720 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:32:55.666142   34720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:32:55.666164   34720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:32:55.666213   34720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.ha-033260-m04 san=[127.0.0.1 192.168.39.104 ha-033260-m04 localhost minikube]
	I0930 11:32:55.889392   34720 provision.go:177] copyRemoteCerts
	I0930 11:32:55.889469   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:32:55.889499   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:55.892080   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892386   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:55.892413   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:55.892551   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:55.892776   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:55.892978   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:55.893178   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:55.976164   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:32:55.976265   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 11:32:56.003465   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:32:56.003537   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:32:56.030648   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:32:56.030726   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:32:56.059845   34720 provision.go:87] duration metric: took 400.888299ms to configureAuth
	I0930 11:32:56.059878   34720 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:32:56.060173   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:56.060271   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.063160   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063613   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.063639   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.063847   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.064052   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064240   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.064367   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.064511   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.064690   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.064709   34720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:32:56.291657   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:32:56.291682   34720 machine.go:96] duration metric: took 1.004640971s to provisionDockerMachine
	I0930 11:32:56.291696   34720 start.go:293] postStartSetup for "ha-033260-m04" (driver="kvm2")
	I0930 11:32:56.291709   34720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:32:56.291730   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.292023   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:32:56.292057   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.294563   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.294915   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.294940   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.295103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.295280   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.295424   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.295532   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.385215   34720 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:32:56.389877   34720 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:32:56.389903   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:32:56.389972   34720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:32:56.390073   34720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:32:56.390086   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:32:56.390178   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:32:56.400442   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:56.429361   34720 start.go:296] duration metric: took 137.644684ms for postStartSetup
	I0930 11:32:56.429427   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.429716   34720 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0930 11:32:56.429741   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.432628   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433129   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.433159   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.433319   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.433495   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.433694   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.433867   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.520351   34720 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0930 11:32:56.520411   34720 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0930 11:32:56.579433   34720 fix.go:56] duration metric: took 20.026269147s for fixHost
	I0930 11:32:56.579489   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.582670   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583091   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.583121   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.583274   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.583494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583682   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.583865   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.584063   34720 main.go:141] libmachine: Using SSH client type: native
	I0930 11:32:56.584279   34720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0930 11:32:56.584294   34720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:32:56.698854   34720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727695976.655532462
	
	I0930 11:32:56.698887   34720 fix.go:216] guest clock: 1727695976.655532462
	I0930 11:32:56.698900   34720 fix.go:229] Guest: 2024-09-30 11:32:56.655532462 +0000 UTC Remote: 2024-09-30 11:32:56.579461897 +0000 UTC m=+453.306592605 (delta=76.070565ms)
	I0930 11:32:56.698920   34720 fix.go:200] guest clock delta is within tolerance: 76.070565ms
	I0930 11:32:56.698927   34720 start.go:83] releasing machines lock for "ha-033260-m04", held for 20.145784895s
	I0930 11:32:56.698949   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.699224   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:56.702454   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.702852   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.702883   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.705376   34720 out.go:177] * Found network options:
	I0930 11:32:56.706947   34720 out.go:177]   - NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	W0930 11:32:56.708247   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708274   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.708287   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.708308   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.708969   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709162   34720 main.go:141] libmachine: (ha-033260-m04) Calling .DriverName
	I0930 11:32:56.709279   34720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:32:56.709323   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	W0930 11:32:56.709360   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709386   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0930 11:32:56.709401   34720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0930 11:32:56.709475   34720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:32:56.709494   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHHostname
	I0930 11:32:56.712173   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712335   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712568   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712592   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712731   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.712845   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:56.712870   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:56.712874   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.712987   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHPort
	I0930 11:32:56.713033   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713103   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHKeyPath
	I0930 11:32:56.713168   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.713207   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetSSHUsername
	I0930 11:32:56.713330   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/ha-033260-m04/id_rsa Username:docker}
	I0930 11:32:56.934813   34720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:32:56.941348   34720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:32:56.941419   34720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:32:56.960961   34720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:32:56.960992   34720 start.go:495] detecting cgroup driver to use...
	I0930 11:32:56.961081   34720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:32:56.980594   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:32:56.996216   34720 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:32:56.996273   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:32:57.013214   34720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:32:57.028755   34720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:32:57.149354   34720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:32:57.318133   34720 docker.go:233] disabling docker service ...
	I0930 11:32:57.318197   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:32:57.334364   34720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:32:57.349711   34720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:32:57.496565   34720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:32:57.627318   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:32:57.643513   34720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:32:57.667655   34720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:32:57.667720   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.680838   34720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:32:57.680907   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.693421   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.705291   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.717748   34720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:32:57.730805   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.742351   34720 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.761934   34720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:32:57.773112   34720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:32:57.783201   34720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:32:57.783257   34720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:32:57.797812   34720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:32:57.813538   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:57.938077   34720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:32:58.044521   34720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:32:58.044587   34720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:32:58.049533   34720 start.go:563] Will wait 60s for crictl version
	I0930 11:32:58.049596   34720 ssh_runner.go:195] Run: which crictl
	I0930 11:32:58.053988   34720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:32:58.101662   34720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:32:58.101732   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.132323   34720 ssh_runner.go:195] Run: crio --version
	I0930 11:32:58.163597   34720 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:32:58.164981   34720 out.go:177]   - env NO_PROXY=192.168.39.249
	I0930 11:32:58.166271   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3
	I0930 11:32:58.167862   34720 out.go:177]   - env NO_PROXY=192.168.39.249,192.168.39.3,192.168.39.238
	I0930 11:32:58.169165   34720 main.go:141] libmachine: (ha-033260-m04) Calling .GetIP
	I0930 11:32:58.172162   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172529   34720 main.go:141] libmachine: (ha-033260-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:41:bc", ip: ""} in network mk-ha-033260: {Iface:virbr1 ExpiryTime:2024-09-30 12:32:48 +0000 UTC Type:0 Mac:52:54:00:99:41:bc Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-033260-m04 Clientid:01:52:54:00:99:41:bc}
	I0930 11:32:58.172550   34720 main.go:141] libmachine: (ha-033260-m04) DBG | domain ha-033260-m04 has defined IP address 192.168.39.104 and MAC address 52:54:00:99:41:bc in network mk-ha-033260
	I0930 11:32:58.172762   34720 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:32:58.178993   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.194096   34720 mustload.go:65] Loading cluster: ha-033260
	I0930 11:32:58.194385   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.194741   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.194790   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.210665   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0930 11:32:58.211101   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.211610   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.211628   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.211954   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.212130   34720 main.go:141] libmachine: (ha-033260) Calling .GetState
	I0930 11:32:58.213485   34720 host.go:66] Checking if "ha-033260" exists ...
	I0930 11:32:58.213820   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:32:58.213854   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:32:58.228447   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34889
	I0930 11:32:58.228877   34720 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:32:58.229355   34720 main.go:141] libmachine: Using API Version  1
	I0930 11:32:58.229373   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:32:58.229837   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:32:58.230027   34720 main.go:141] libmachine: (ha-033260) Calling .DriverName
	I0930 11:32:58.230180   34720 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260 for IP: 192.168.39.104
	I0930 11:32:58.230191   34720 certs.go:194] generating shared ca certs ...
	I0930 11:32:58.230204   34720 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:32:58.230340   34720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:32:58.230387   34720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:32:58.230397   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:32:58.230409   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:32:58.230422   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:32:58.230434   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:32:58.230491   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:32:58.230521   34720 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:32:58.230531   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:32:58.230554   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:32:58.230577   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:32:58.230597   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:32:58.230650   34720 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:32:58.230688   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.230705   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.230732   34720 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.230759   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:32:58.258115   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:32:58.284212   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:32:58.311332   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:32:58.336428   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:32:58.362719   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:32:58.389689   34720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:32:58.416593   34720 ssh_runner.go:195] Run: openssl version
	I0930 11:32:58.423417   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:32:58.435935   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442361   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.442428   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:32:58.448829   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:32:58.461056   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:32:58.473436   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478046   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.478120   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:32:58.484917   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:32:58.497497   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:32:58.509506   34720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514695   34720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.514766   34720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:32:58.521000   34720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:32:58.533195   34720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:32:58.538066   34720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 11:32:58.538108   34720 kubeadm.go:934] updating node {m04 192.168.39.104 0 v1.31.1 crio false true} ...
	I0930 11:32:58.538196   34720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-033260-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-033260 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:32:58.538246   34720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:32:58.549564   34720 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:32:58.549678   34720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0930 11:32:58.561086   34720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0930 11:32:58.581046   34720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:32:58.599680   34720 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0930 11:32:58.603972   34720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:32:58.618040   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.758745   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.778316   34720 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.104 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0930 11:32:58.778666   34720 config.go:182] Loaded profile config "ha-033260": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:32:58.780417   34720 out.go:177] * Verifying Kubernetes components...
	I0930 11:32:58.781848   34720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:32:58.954652   34720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:32:58.980788   34720 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:32:58.981140   34720 kapi.go:59] client config for ha-033260: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/ha-033260/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0930 11:32:58.981229   34720 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.249:8443
	I0930 11:32:58.981531   34720 node_ready.go:35] waiting up to 6m0s for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:58.981654   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:58.981668   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:58.981678   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:58.981682   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:58.985441   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.482501   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:32:59.482522   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.482530   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.482534   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.485809   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.486316   34720 node_ready.go:49] node "ha-033260-m04" has status "Ready":"True"
	I0930 11:32:59.486339   34720 node_ready.go:38] duration metric: took 504.792648ms for node "ha-033260-m04" to be "Ready" ...
	I0930 11:32:59.486347   34720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:32:59.486421   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods
	I0930 11:32:59.486437   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.486444   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.486448   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.491643   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:32:59.500880   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.501000   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-5frmm
	I0930 11:32:59.501020   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.501033   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.501040   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.504511   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.505105   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.505120   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.505126   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.505130   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.508330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.508816   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.508834   34720 pod_ready.go:82] duration metric: took 7.916953ms for pod "coredns-7c65d6cfc9-5frmm" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508846   34720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.508911   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kt87v
	I0930 11:32:59.508921   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.508931   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.508940   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.512254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.513133   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.513147   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.513157   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.513162   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.516730   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.517273   34720 pod_ready.go:93] pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.517290   34720 pod_ready.go:82] duration metric: took 8.437165ms for pod "coredns-7c65d6cfc9-kt87v" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517301   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.517361   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260
	I0930 11:32:59.517370   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.517380   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.517387   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521073   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:32:59.521748   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:32:59.521764   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.521772   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.521776   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.524702   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.525300   34720 pod_ready.go:93] pod "etcd-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.525316   34720 pod_ready.go:82] duration metric: took 8.008761ms for pod "etcd-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525325   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.525375   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m02
	I0930 11:32:59.525383   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.525390   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.525393   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.528314   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.528898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:32:59.528914   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.528924   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.528930   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.531717   34720 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0930 11:32:59.532229   34720 pod_ready.go:93] pod "etcd-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.532246   34720 pod_ready.go:82] duration metric: took 6.914296ms for pod "etcd-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.532257   34720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.682582   34720 request.go:632] Waited for 150.25854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682645   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-033260-m03
	I0930 11:32:59.682651   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.682658   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.682662   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.689539   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:32:59.883130   34720 request.go:632] Waited for 192.41473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883192   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:32:59.883200   34720 round_trippers.go:469] Request Headers:
	I0930 11:32:59.883210   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:32:59.883232   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:32:59.887618   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:32:59.888108   34720 pod_ready.go:93] pod "etcd-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:32:59.888129   34720 pod_ready.go:82] duration metric: took 355.865471ms for pod "etcd-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:32:59.888150   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.083448   34720 request.go:632] Waited for 195.22183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083541   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260
	I0930 11:33:00.083549   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.083560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.083571   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.087440   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.283491   34720 request.go:632] Waited for 195.322885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:00.283581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.283590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.283596   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.287218   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.287959   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.287982   34720 pod_ready.go:82] duration metric: took 399.823014ms for pod "kube-apiserver-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.287995   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.483353   34720 request.go:632] Waited for 195.279455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483436   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m02
	I0930 11:33:00.483446   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.483457   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.483468   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.487640   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:00.682537   34720 request.go:632] Waited for 194.177349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682615   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:00.682623   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.682632   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.682641   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.686128   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:00.686721   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:00.686744   34720 pod_ready.go:82] duration metric: took 398.740461ms for pod "kube-apiserver-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.686757   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:00.882895   34720 request.go:632] Waited for 196.06624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882951   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-033260-m03
	I0930 11:33:00.882956   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:00.882963   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:00.882967   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:00.887704   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.082816   34720 request.go:632] Waited for 194.378573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082898   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:01.082908   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.082920   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.082928   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.086938   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.088023   34720 pod_ready.go:93] pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.088045   34720 pod_ready.go:82] duration metric: took 401.279304ms for pod "kube-apiserver-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.088058   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.283083   34720 request.go:632] Waited for 194.957282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283183   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260
	I0930 11:33:01.283198   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.283211   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.283221   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.288754   34720 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0930 11:33:01.482812   34720 request.go:632] Waited for 193.21938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482876   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:01.482883   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.482895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.482906   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.487184   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.488013   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.488035   34720 pod_ready.go:82] duration metric: took 399.968755ms for pod "kube-controller-manager-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.488047   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.682796   34720 request.go:632] Waited for 194.675415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682878   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m02
	I0930 11:33:01.682885   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.682895   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.682903   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.687354   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:01.883473   34720 request.go:632] Waited for 195.37133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883544   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:01.883551   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:01.883560   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:01.883565   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:01.887254   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:01.887998   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:01.888020   34720 pod_ready.go:82] duration metric: took 399.964872ms for pod "kube-controller-manager-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:01.888033   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.082969   34720 request.go:632] Waited for 194.870325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083045   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-033260-m03
	I0930 11:33:02.083051   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.083059   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.083071   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.087791   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.283169   34720 request.go:632] Waited for 194.361368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283289   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:02.283304   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.283331   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.283350   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.289541   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:02.290706   34720 pod_ready.go:93] pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:02.290729   34720 pod_ready.go:82] duration metric: took 402.687198ms for pod "kube-controller-manager-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.290741   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:02.483158   34720 request.go:632] Waited for 192.351675ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483216   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.483222   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.483229   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.483233   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.487135   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:02.683325   34720 request.go:632] Waited for 195.063306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683451   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:02.683485   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.683516   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.683525   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.687678   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:02.883237   34720 request.go:632] Waited for 92.265907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883323   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:02.883335   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:02.883343   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:02.883351   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:02.887580   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.082785   34720 request.go:632] Waited for 194.294379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082857   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.082862   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.082872   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.082876   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.086700   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.291740   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cr58q
	I0930 11:33:03.291767   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.291777   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.291783   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.295392   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.483576   34720 request.go:632] Waited for 187.437599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483647   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m04
	I0930 11:33:03.483655   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.483667   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.483677   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.487588   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:03.488048   34720 pod_ready.go:93] pod "kube-proxy-cr58q" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.488067   34720 pod_ready.go:82] duration metric: took 1.197317957s for pod "kube-proxy-cr58q" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.488076   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.683488   34720 request.go:632] Waited for 195.341906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683573   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fckwn
	I0930 11:33:03.683581   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.683590   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.683597   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.687625   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.882797   34720 request.go:632] Waited for 194.279012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882884   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:03.882896   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:03.882906   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:03.882924   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:03.886967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:03.887827   34720 pod_ready.go:93] pod "kube-proxy-fckwn" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:03.887857   34720 pod_ready.go:82] duration metric: took 399.773896ms for pod "kube-proxy-fckwn" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:03.887870   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.082926   34720 request.go:632] Waited for 194.972094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083017   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fctld
	I0930 11:33:04.083025   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.083037   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.083041   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.087402   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.283534   34720 request.go:632] Waited for 194.922082ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283613   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:04.283619   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.283626   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.283630   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.287420   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:04.288067   34720 pod_ready.go:93] pod "kube-proxy-fctld" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.288124   34720 pod_ready.go:82] duration metric: took 400.245815ms for pod "kube-proxy-fctld" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.288141   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.483212   34720 request.go:632] Waited for 194.995215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483277   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mxvxr
	I0930 11:33:04.483290   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.483319   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.483325   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.487831   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.682773   34720 request.go:632] Waited for 194.183233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682836   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:04.682843   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.682854   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.682858   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.686967   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:04.687793   34720 pod_ready.go:93] pod "kube-proxy-mxvxr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:04.687819   34720 pod_ready.go:82] duration metric: took 399.669055ms for pod "kube-proxy-mxvxr" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.687836   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:04.882848   34720 request.go:632] Waited for 194.931159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882922   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260
	I0930 11:33:04.882930   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:04.882942   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:04.882951   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:04.886911   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.083280   34720 request.go:632] Waited for 195.375329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083376   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260
	I0930 11:33:05.083387   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.083398   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.083407   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.086880   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.087419   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.087441   34720 pod_ready.go:82] duration metric: took 399.596031ms for pod "kube-scheduler-ha-033260" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.087453   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.282500   34720 request.go:632] Waited for 194.956546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282556   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m02
	I0930 11:33:05.282561   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.282568   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.282582   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.285978   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.482968   34720 request.go:632] Waited for 196.156247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483125   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m02
	I0930 11:33:05.483139   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.483149   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.483155   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.489591   34720 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0930 11:33:05.490240   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.490263   34720 pod_ready.go:82] duration metric: took 402.801252ms for pod "kube-scheduler-ha-033260-m02" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.490276   34720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.683160   34720 request.go:632] Waited for 192.80812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683317   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-033260-m03
	I0930 11:33:05.683345   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.683360   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.683366   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.687330   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.883447   34720 request.go:632] Waited for 195.335552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883523   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes/ha-033260-m03
	I0930 11:33:05.883530   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:05.883545   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:05.883553   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:05.887272   34720 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0930 11:33:05.888002   34720 pod_ready.go:93] pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace has status "Ready":"True"
	I0930 11:33:05.888020   34720 pod_ready.go:82] duration metric: took 397.737135ms for pod "kube-scheduler-ha-033260-m03" in "kube-system" namespace to be "Ready" ...
	I0930 11:33:05.888031   34720 pod_ready.go:39] duration metric: took 6.401673703s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:33:05.888048   34720 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:33:05.888099   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:33:05.905331   34720 system_svc.go:56] duration metric: took 17.278667ms WaitForService to wait for kubelet
	I0930 11:33:05.905363   34720 kubeadm.go:582] duration metric: took 7.126999309s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:33:05.905382   34720 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:33:06.082680   34720 request.go:632] Waited for 177.227376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082733   34720 round_trippers.go:463] GET https://192.168.39.249:8443/api/v1/nodes
	I0930 11:33:06.082739   34720 round_trippers.go:469] Request Headers:
	I0930 11:33:06.082746   34720 round_trippers.go:473]     Accept: application/json, */*
	I0930 11:33:06.082751   34720 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0930 11:33:06.087224   34720 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0930 11:33:06.088896   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088918   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088929   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088932   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088935   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088939   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088942   34720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:33:06.088945   34720 node_conditions.go:123] node cpu capacity is 2
	I0930 11:33:06.088948   34720 node_conditions.go:105] duration metric: took 183.562454ms to run NodePressure ...
	I0930 11:33:06.088959   34720 start.go:241] waiting for startup goroutines ...
	I0930 11:33:06.088977   34720 start.go:255] writing updated cluster config ...
	I0930 11:33:06.089268   34720 ssh_runner.go:195] Run: rm -f paused
	I0930 11:33:06.143377   34720 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 11:33:06.145486   34720 out.go:177] * Done! kubectl is now configured to use "ha-033260" cluster and "default" namespace by default
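The pod_ready.go and node_conditions.go entries above show minikube polling the API server (GET .../namespaces/kube-system/pods/<name>, then GET .../nodes/<node>) until every system pod reports the Ready condition; the "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter delaying those GETs. Below is a minimal standalone sketch of the same kind of readiness probe using client-go; it is an illustrative assumption, not code from this test run, and the kubeconfig path is a placeholder (the pod name is copied from the log above).

package main

// Illustrative sketch only (not part of minikube): fetch one kube-system pod
// and report its Ready condition, mirroring the pod_ready.go checks logged above.

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load a kubeconfig from the default location (~/.kube/config); the actual
	// path used by this test run differs and is an assumption here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent to the logged GET /api/v1/namespaces/kube-system/pods/kube-proxy-cr58q.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-cr58q", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Inspect the pod's Ready condition, which the log reports as "True".
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
		}
	}
}

Run against the cluster from this report, this would print something like "pod kube-proxy-cr58q Ready=True", matching the pod_ready.go:93 entries above.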
	
	
	==> CRI-O <==
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.731038999Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078731014589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b456984d-a881-4700-9f7d-f789d96593d8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.731503631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1f3bd8e-341d-432e-aeaf-3570f08c4ce0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.731574339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1f3bd8e-341d-432e-aeaf-3570f08c4ce0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.731958525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1f3bd8e-341d-432e-aeaf-3570f08c4ce0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.776401456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f39c188e-a34e-4a27-bc6c-849839b16b21 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.776498563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f39c188e-a34e-4a27-bc6c-849839b16b21 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.777824182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ea02f58-3af8-4e08-bb83-51c8e3bdc6e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.778563448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078778298255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ea02f58-3af8-4e08-bb83-51c8e3bdc6e9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.779141063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3791f5fe-06f7-4365-8623-fbccd2fd12b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.779196763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3791f5fe-06f7-4365-8623-fbccd2fd12b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.779540637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3791f5fe-06f7-4365-8623-fbccd2fd12b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.829642592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7aceee8-7434-4c15-8653-8873f478ebb4 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.829775221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7aceee8-7434-4c15-8653-8873f478ebb4 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.831180741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a6773b5-aa3d-4b26-83db-a284d29a6e1f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.831728901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078831704218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a6773b5-aa3d-4b26-83db-a284d29a6e1f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.832440707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ac81638-409b-490e-8d48-a745cdcbaec6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.832499656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ac81638-409b-490e-8d48-a745cdcbaec6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.833017229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ac81638-409b-490e-8d48-a745cdcbaec6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.882059963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c86a8c5-8e54-49f1-ad3e-cb204ef1018f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.882132636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c86a8c5-8e54-49f1-ad3e-cb204ef1018f name=/runtime.v1.RuntimeService/Version
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.883283651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d90af727-33d9-421c-8f96-6a66b79546c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.885147791Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078885108544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d90af727-33d9-421c-8f96-6a66b79546c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.885991264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f78daf4-2af1-418b-8f91-a924836209f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.886049049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f78daf4-2af1-418b-8f91-a924836209f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:34:38 ha-033260 crio[1037]: time="2024-09-30 11:34:38.886441741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88e9d994261ce483cde12a1e90ccb509a74f712d93cdef5e3de0e94fc31f6ea9,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727695919460224427,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3f12d455b8e9ed2884f3c21c48e97c428032fa12ea69f6165e2f04a5118f45,PodSandboxId:80de34a6f14caa0af3a20ad4fbd4c89cfd4d4e24e7219f3156db87e3680d1921,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727695890221985053,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-nbhwc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62e1e44-3723-496c-85a3-7a79e9c8264b,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe,PodSandboxId:40863d7ac6437d071076751feb5985dd56b1b1d95e20251672530a697c2b0c27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727695889084388870,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g94k6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 260e385d-9e17-4af8-a854-8683afb714c4,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5,PodSandboxId:96e86b12ad9b728eafc7f62a74c8c066cfd2a67a555877cbf3bac75b6771db16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727695888956714482,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mxvxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 314da0b5-6242-4af0-8e99-d0aaba82a43e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6,PodSandboxId:74bab7f17b06bdf88ae4923268f6142b849f5df90a4bdba664d76ce78800400a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888808431924,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kt87v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f75c31-d44d-4a4c-8048-b6ce5c824151,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"nam
e\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7,PodSandboxId:f6863e18fb197b2b6dd80b5e638c1b7240787cc5c13bfc7de76a510a048db786,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727695888734189643,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5frmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7333717d-95d5-4990-bac9-8443a51eee97,},Annotations:map[string]string{io.kuber
netes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4c792280b15b102c1a6a515c1f412a8e20496b84279d93dad9e82595e825519,PodSandboxId:d40067a91d08355576bb0487a4378a50b93a4004b896136d8ac017df775f245b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727695888647968247,Labels:map[string]string{io.kubernetes.containe
r.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964381ab-f2ac-4361-a7e0-5212fff5e26e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727695882255243102,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727695867192828888,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf743c3bfec107bd3690603081cc163bb1451f68f726785882769a84b70d3114,PodSandboxId:bfb2a9b6e2e5abc702010751ef90db7081f7b0ebfeb6e1d3184e5c2118b36473,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727695847258168732,Labels:map[string]string{io.kubernetes.container.nam
e: kube-vip,io.kubernetes.pod.name: kube-vip-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89735afe181ab1f81ff05fc69dd5d08e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1,PodSandboxId:498808de720758f1c61b761ddd612131ddfd9e54da617dd49d3d5478d97c99ea,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727695844803934371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.po
d.name: kube-apiserver-ha-033260,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c1732ebd63e52d0c6ac6d9cd648cff5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199,PodSandboxId:5d3f45272bb025187e3cf3ec97188e0594f0d2c06be766d612d35cdfba44d1ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727695844755790998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-033260,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 4ee6f0cb154890b5d1bf6173256957d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40,PodSandboxId:aeafc6ee55a4d3a78046072fa32c526be5555deae23e60ed33a5ae83612d1069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727695844741856819,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-033260,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 734999721cb3f48c24354599fcaf3db2,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438,PodSandboxId:1eee82fccc84c1292b72e814ce4304ebf5b8af0b05ce52092c5a5bc3338e6ad4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727695844683487548,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-033260,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 43955f8cf95999657a88952585c93768,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f78daf4-2af1-418b-8f91-a924836209f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88e9d994261ce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       5                   d40067a91d083       storage-provisioner
	df3f12d455b8e       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   2                   80de34a6f14ca       busybox-7dff88458-nbhwc
	1937cce4ac070       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               2                   40863d7ac6437       kindnet-g94k6
	447147b39349f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      3 minutes ago       Running             kube-proxy                2                   96e86b12ad9b7       kube-proxy-mxvxr
	d33c75c18e088       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   2                   74bab7f17b06b       coredns-7c65d6cfc9-kt87v
	88e2f3c9b905b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago       Running             coredns                   2                   f6863e18fb197       coredns-7c65d6cfc9-5frmm
	f4c792280b15b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       4                   d40067a91d083       storage-provisioner
	487866f095e01       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago       Running             kube-controller-manager   4                   1eee82fccc84c       kube-controller-manager-ha-033260
	6ea8bba210502       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Running             kube-apiserver            4                   498808de72075       kube-apiserver-ha-033260
	bf743c3bfec10       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     3 minutes ago       Running             kube-vip                  1                   bfb2a9b6e2e5a       kube-vip-ha-033260
	91514ddf1467c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      3 minutes ago       Exited              kube-apiserver            3                   498808de72075       kube-apiserver-ha-033260
	b2e1a261e4464       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago       Running             etcd                      2                   5d3f45272bb02       etcd-ha-033260
	fd2ffaa7ff33d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      3 minutes ago       Running             kube-scheduler            2                   aeafc6ee55a4d       kube-scheduler-ha-033260
	9f9c8e0b4eb8f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      3 minutes ago       Exited              kube-controller-manager   3                   1eee82fccc84c       kube-controller-manager-ha-033260
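A listing in this format can typically be reproduced against the node's CRI-O runtime, assuming SSH access to the minikube VM, with something along the lines of "minikube -p ha-033260 ssh" followed by "sudo crictl ps -a", which prints the same CONTAINER / IMAGE / CREATED / STATE / NAME / ATTEMPT / POD ID / POD columns shown above (the profile name here is taken from this log, not a general default).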
	
	
	==> coredns [88e2f3c9b905bbfbea773554d4153e06e516e2857c16eba0a6c3858a1d3151c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:60977 - 56023 "HINFO IN 6022066924044087929.8494370084378227503. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030589997s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1363673838]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.175) (total time: 30002ms):
	Trace[1363673838]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:31:59.176)
	Trace[1363673838]: [30.00230997s] [30.00230997s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1452341617]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30003ms):
	Trace[1452341617]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1452341617]: [30.0032564s] [30.0032564s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1546520065]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1546520065]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1546520065]: [30.002775951s] [30.002775951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [d33c75c18e08871e23b88e17b78e93fb13535899f1c18385bdacee7310b13ed6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44743 - 60294 "HINFO IN 2203689339262482561.411210931008286347. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030703121s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469308931]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[469308931]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.176)
	Trace[469308931]: [30.002568999s] [30.002568999s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1100740362]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.174) (total time: 30002ms):
	Trace[1100740362]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.177)
	Trace[1100740362]: [30.002476509s] [30.002476509s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1653957079]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (30-Sep-2024 11:31:29.176) (total time: 30002ms):
	Trace[1653957079]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (11:31:59.178)
	Trace[1653957079]: [30.002259084s] [30.002259084s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
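The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries in both coredns logs are the kubernetes plugin's reflectors failing to reach the in-cluster apiserver Service VIP while the control plane was restarting. A minimal sketch of the same connectivity check, assuming it is run from inside a pod on this cluster (the address and the ~30s deadline are taken from the traces above, not from minikube itself):

// probe.go: hypothetical standalone check mirroring the reflector's failure mode.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster kubernetes Service address seen in the log.
	// The 30s deadline matches the ListAndWatch timeouts reported in the traces.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // in the failing window this reports an i/o timeout
		return
	}
	defer conn.Close()
	fmt.Println("apiserver Service VIP reachable")
}

If the dial succeeds, the reflector list/watch errors above would not be expected to persist.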
	
	
	==> describe nodes <==
	Name:               ha-033260
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:11:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:11:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:31:45 +0000   Mon, 30 Sep 2024 11:31:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-033260
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 285e64dc8d10442694303513a400e333
	  System UUID:                285e64dc-8d10-4426-9430-3513a400e333
	  Boot ID:                    819b9c53-0125-4e30-b11d-f0c734cdb490
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nbhwc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-5frmm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-7c65d6cfc9-kt87v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-033260                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-g94k6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-033260             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-033260    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-mxvxr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-033260             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-033260                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 22m                  kube-proxy       
	  Normal  Starting                 3m9s                 kube-proxy       
	  Normal  Starting                 22m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  NodeReady                22m                  kubelet          Node ha-033260 status is now: NodeReady
	  Normal  RegisteredNode           21m                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           20m                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  Starting                 4m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s (x8 over 4m1s)  kubelet          Node ha-033260 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x8 over 4m1s)  kubelet          Node ha-033260 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x7 over 4m1s)  kubelet          Node ha-033260 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           3m14s                node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           2m15s                node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
	  Normal  RegisteredNode           26s                  node-controller  Node ha-033260 event: Registered Node ha-033260 in Controller
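For reference, the percentages kubectl reports in the Allocated resources block above follow directly from this node's allocatable capacity (2 CPUs = 2000m, 2164184Ki ≈ 2.06Gi of memory): CPU requests 950m / 2000m ≈ 47%, CPU limits 100m / 2000m = 5%, memory requests 290Mi (296960Ki) / 2164184Ki ≈ 13%, and memory limits 390Mi (399360Ki) / 2164184Ki ≈ 18%.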
	
	
	Name:               ha-033260-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_12_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:12:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:04 +0000   Mon, 30 Sep 2024 11:31:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.3
	  Hostname:    ha-033260-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1504aa96b0e7414e83ec57ce754ea274
	  System UUID:                1504aa96-b0e7-414e-83ec-57ce754ea274
	  Boot ID:                    c982302c-6e81-49de-9ba4-9fad6b0527be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-748nr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-033260-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-752cm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-033260-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-033260-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-fckwn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-033260-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-033260-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 21m                    kube-proxy       
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)      kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           21m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           20m                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  NodeNotReady             18m                    node-controller  Node ha-033260-m02 status is now: NodeNotReady
	  Normal  Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m37s (x8 over 3m38s)  kubelet          Node ha-033260-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m37s (x8 over 3m38s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m37s (x7 over 3m38s)  kubelet          Node ha-033260-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	  Normal  RegisteredNode           26s                    node-controller  Node ha-033260-m02 event: Registered Node ha-033260-m02 in Controller
	
	
	Name:               ha-033260-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_14_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:14:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:32:34 +0000   Mon, 30 Sep 2024 11:14:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-033260-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 581b37e2b76245bf813ddd1801a6b9a3
	  System UUID:                581b37e2-b762-45bf-813d-dd1801a6b9a3
	  Boot ID:                    0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rkczc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-033260-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-4rpgw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-033260-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-033260-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-fctld                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-033260-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-033260-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m19s              kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           20m                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           3m15s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           3m14s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   Starting                 2m36s              kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m35s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m35s              kubelet          Node ha-033260-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s              kubelet          Node ha-033260-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s              kubelet          Node ha-033260-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m35s              kubelet          Node ha-033260-m03 has been rebooted, boot id: 0c35b92a-eb4b-47a6-b3cf-ae8fef309d67
	  Normal   RegisteredNode           2m15s              node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	  Normal   RegisteredNode           26s                node-controller  Node ha-033260-m03 event: Registered Node ha-033260-m03 in Controller
	
	
	Name:               ha-033260-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:33:29 +0000   Mon, 30 Sep 2024 11:32:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-033260-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f7e5ab5969e49808de6a4938b82b604
	  System UUID:                3f7e5ab5-969e-4980-8de6-a4938b82b604
	  Boot ID:                    5c8fe13a-3363-443e-bb87-2dda804740af
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kb2cp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-cr58q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 19m                  kube-proxy       
	  Normal   Starting                 96s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  19m (x2 over 19m)    kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m (x2 over 19m)    kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           19m                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           19m                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeReady                19m                  kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           3m15s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   RegisteredNode           3m14s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   NodeNotReady             2m35s                node-controller  Node ha-033260-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           2m15s                node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	  Normal   Starting                 100s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  100s (x2 over 100s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s (x2 over 100s)  kubelet          Node ha-033260-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s (x2 over 100s)  kubelet          Node ha-033260-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 100s                 kubelet          Node ha-033260-m04 has been rebooted, boot id: 5c8fe13a-3363-443e-bb87-2dda804740af
	  Normal   NodeReady                100s                 kubelet          Node ha-033260-m04 status is now: NodeReady
	  Normal   RegisteredNode           26s                  node-controller  Node ha-033260-m04 event: Registered Node ha-033260-m04 in Controller
	
	
	Name:               ha-033260-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-033260-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=ha-033260
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_34_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:34:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-033260-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:34:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:34:33 +0000   Mon, 30 Sep 2024 11:34:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.146
	  Hostname:    ha-033260-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b0444cb38648a79a2155d7dbdd1774
	  System UUID:                82b0444c-b386-48a7-9a21-55d7dbdd1774
	  Boot ID:                    26c588a2-1adf-44af-9d60-2a708fb03f44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-033260-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         34s
	  kube-system                 kindnet-9bn6h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      36s
	  kube-system                 kube-apiserver-ha-033260-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-ha-033260-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-6ddjb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-ha-033260-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-vip-ha-033260-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node ha-033260-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node ha-033260-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x7 over 37s)  kubelet          Node ha-033260-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           35s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           35s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           34s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	  Normal  RegisteredNode           26s                node-controller  Node ha-033260-m05 event: Registered Node ha-033260-m05 in Controller
	
	
	==> dmesg <==
	[Sep30 11:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051485] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.894871] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.799819] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.926902] systemd-fstab-generator[960]: Ignoring "noauto" option for root device
	[  +0.063947] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060890] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.189706] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.143881] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.315063] systemd-fstab-generator[1028]: Ignoring "noauto" option for root device
	[  +4.231701] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.066662] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.898522] kauditd_printk_skb: 29 callbacks suppressed
	[ +10.432816] kauditd_printk_skb: 40 callbacks suppressed
	[Sep30 11:31] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [b2e1a261e44643d70b1aa9a65e5f516354b5dc09cc5e5b9150d744263bac1199] <==
	{"level":"info","ts":"2024-09-30T11:34:03.220627Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","added-peer-id":"182fb6b050f82820","added-peer-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-09-30T11:34:03.220703Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.220755Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.225150Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.225266Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820","remote-peer-urls":["https://192.168.39.146:2380"]}
	{"level":"info","ts":"2024-09-30T11:34:03.227877Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.228212Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.228481Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:03.229273Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:03.396992Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-09-30T11:34:04.392189Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:04.799407Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"182fb6b050f82820","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-30T11:34:04.799456Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.799510Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.809420Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"318ee90c3446d547","to":"182fb6b050f82820","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-30T11:34:04.809478Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:04.883795Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:04.924487Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"info","ts":"2024-09-30T11:34:04.926745Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"318ee90c3446d547","remote-peer-id":"182fb6b050f82820"}
	{"level":"warn","ts":"2024-09-30T11:34:05.051491Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.146:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-09-30T11:34:05.070541Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.146:49184","server-name":"","error":"read tcp 192.168.39.249:2380->192.168.39.146:49184: read: connection reset by peer"}
	{"level":"warn","ts":"2024-09-30T11:34:05.880660Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"182fb6b050f82820","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-30T11:34:06.382681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 switched to configuration voters=(1742812449204611104 2179423914693294938 3571047793177318727 18390992626900585602)"}
	{"level":"info","ts":"2024-09-30T11:34:06.382967Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547"}
	{"level":"info","ts":"2024-09-30T11:34:06.383056Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"318ee90c3446d547","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"182fb6b050f82820"}
	
	
	==> kernel <==
	 11:34:39 up 4 min,  0 users,  load average: 0.14, 0.24, 0.11
	Linux ha-033260 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1937cce4ac0701ce90fbb4b75e4dc83562f2393963124754fa037660f732edbe] <==
	I0930 11:34:10.501805       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:10.501972       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:10.502027       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:10.502174       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:10.502241       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:20.503444       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:34:20.503553       1 main.go:299] handling current node
	I0930 11:34:20.503584       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:34:20.503602       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:20.503752       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:20.503806       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:20.503918       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:20.503956       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:20.504030       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0930 11:34:20.504050       1 main.go:322] Node ha-033260-m05 has CIDR [10.244.4.0/24] 
	I0930 11:34:30.500054       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0930 11:34:30.500578       1 main.go:299] handling current node
	I0930 11:34:30.500677       1 main.go:295] Handling node with IPs: map[192.168.39.3:{}]
	I0930 11:34:30.500727       1 main.go:322] Node ha-033260-m02 has CIDR [10.244.1.0/24] 
	I0930 11:34:30.501004       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0930 11:34:30.501041       1 main.go:322] Node ha-033260-m03 has CIDR [10.244.2.0/24] 
	I0930 11:34:30.501140       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0930 11:34:30.501166       1 main.go:322] Node ha-033260-m04 has CIDR [10.244.3.0/24] 
	I0930 11:34:30.501254       1 main.go:295] Handling node with IPs: map[192.168.39.146:{}]
	I0930 11:34:30.501285       1 main.go:322] Node ha-033260-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [6ea8bba210502ee119aff2e2f5a8a9eb9739969eaaa38dc15e5b660061166e7c] <==
	I0930 11:31:21.381575       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0930 11:31:21.538562       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:31:21.543182       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:31:21.543721       1 policy_source.go:224] refreshing policies
	I0930 11:31:21.579575       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:31:21.579665       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:31:21.580585       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:31:21.581145       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:31:21.581189       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:31:21.579601       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 11:31:21.579657       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:31:21.581999       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 11:31:21.582037       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:31:21.582044       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:31:21.582048       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:31:21.582053       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:31:21.586437       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0930 11:31:21.607643       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238]
	I0930 11:31:21.609050       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 11:31:21.622457       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0930 11:31:21.631794       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0930 11:31:21.643397       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:31:22.390935       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0930 11:31:22.949170       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.238 192.168.39.249]
	W0930 11:31:42.954664       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249 192.168.39.3]
	
	
	==> kube-apiserver [91514ddf1467cefe18bde09c55a55293fad07dce469f221f9c425c97793c23f1] <==
	I0930 11:30:45.187556       1 options.go:228] external host was not specified, using 192.168.39.249
	I0930 11:30:45.195121       1 server.go:142] Version: v1.31.1
	I0930 11:30:45.195252       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.676469       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0930 11:30:46.702385       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:30:46.710100       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0930 11:30:46.716179       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0930 11:30:46.716589       1 instance.go:232] Using reconciler: lease
	W0930 11:31:06.661936       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.662284       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0930 11:31:06.717971       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0930 11:31:06.718008       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [487866f095e014c491d8322854f012a40f175e6f44a60c84a62e70963ae2741a] <==
	I0930 11:32:59.248647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:32:59.651723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:33:29.535408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m04"
	I0930 11:34:03.020603       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-033260-m05\" does not exist"
	I0930 11:34:03.024566       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:34:03.049032       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-033260-m05" podCIDRs=["10.244.4.0/24"]
	I0930 11:34:03.049178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.049244       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.070085       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:03.116559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:04.752051       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-033260-m05"
	I0930 11:34:04.778067       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:05.552860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:05.654561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.053887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.684022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:07.832721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.147655       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.258108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:13.396911       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:23.927584       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-033260-m04"
	I0930 11:34:23.927600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:23.948773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:24.684227       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	I0930 11:34:33.535177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-033260-m05"
	
	
	==> kube-controller-manager [9f9c8e0b4eb8f2ccccccfad095c7ae98713b410f10242b33f628b62252718438] <==
	I0930 11:30:45.993698       1 serving.go:386] Generated self-signed cert in-memory
	I0930 11:30:46.957209       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0930 11:30:46.957296       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:30:46.962662       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0930 11:30:46.963278       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:30:46.963571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:30:46.963743       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0930 11:31:21.471526       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [447147b39349f864379a88335fc7ff31baa99e5d6da5823f73dc59f2ec4ce6a5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:31:29.611028       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:31:29.650081       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E0930 11:31:29.650432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:31:29.730719       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:31:29.730781       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:31:29.730811       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:31:29.734900       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:31:29.735864       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:31:29.735899       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:31:29.738688       1 config.go:199] "Starting service config controller"
	I0930 11:31:29.738986       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:31:29.739407       1 config.go:328] "Starting node config controller"
	I0930 11:31:29.739433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:31:29.739913       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:31:29.743750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:31:29.743822       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:31:29.840409       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:31:29.840462       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [fd2ffaa7ff33d47983dc89a84a3bf661b2c530c89896c458243a7c74cf0f1b40] <==
	W0930 11:31:21.480661       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:31:21.480791       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0930 11:31:23.035263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:34:03.144301       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-gmtjt\": pod kube-proxy-gmtjt is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-gmtjt" node="ha-033260-m05"
	E0930 11:34:03.147570       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 73f4f491-bf2e-4e17-a8b4-b0908b01186a(kube-system/kube-proxy-gmtjt) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-gmtjt"
	E0930 11:34:03.151193       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-gmtjt\": pod kube-proxy-gmtjt is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-gmtjt"
	I0930 11:34:03.153223       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-gmtjt" node="ha-033260-m05"
	E0930 11:34:03.153592       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-knb8g\": pod kindnet-knb8g is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-knb8g" node="ha-033260-m05"
	E0930 11:34:03.157433       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 3a889bb1-8d4f-409c-956d-1dfc1466b1c4(kube-system/kindnet-knb8g) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-knb8g"
	E0930 11:34:03.157542       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-knb8g\": pod kindnet-knb8g is already assigned to node \"ha-033260-m05\"" pod="kube-system/kindnet-knb8g"
	I0930 11:34:03.157610       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-knb8g" node="ha-033260-m05"
	E0930 11:34:03.147155       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z97cm\": pod kube-proxy-z97cm is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z97cm" node="ha-033260-m05"
	E0930 11:34:03.157746       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 15112029-30e1-4a61-a241-dfbb2dab99e9(kube-system/kube-proxy-z97cm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z97cm"
	E0930 11:34:03.157754       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z97cm\": pod kube-proxy-z97cm is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-z97cm"
	I0930 11:34:03.157815       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z97cm" node="ha-033260-m05"
	E0930 11:34:05.465774       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-6ddjb\": pod kube-proxy-6ddjb is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-6ddjb" node="ha-033260-m05"
	E0930 11:34:05.465973       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-6ddjb\": pod kube-proxy-6ddjb is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-6ddjb"
	E0930 11:34:05.467782       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-kp6dd\": pod kube-proxy-kp6dd is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-kp6dd" node="ha-033260-m05"
	E0930 11:34:05.467827       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1ab922ab-22f6-4421-be70-d9d33fb156f7(kube-system/kube-proxy-kp6dd) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-kp6dd"
	E0930 11:34:05.467846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kp6dd\": pod kube-proxy-kp6dd is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-kp6dd"
	I0930 11:34:05.467865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kp6dd" node="ha-033260-m05"
	E0930 11:34:05.468295       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lnh54\": pod kube-proxy-lnh54 is already assigned to node \"ha-033260-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lnh54" node="ha-033260-m05"
	E0930 11:34:05.468372       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0db280d3-c9b7-4101-a094-2d3ab3b46285(kube-system/kube-proxy-lnh54) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lnh54"
	E0930 11:34:05.468394       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lnh54\": pod kube-proxy-lnh54 is already assigned to node \"ha-033260-m05\"" pod="kube-system/kube-proxy-lnh54"
	I0930 11:34:05.468417       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lnh54" node="ha-033260-m05"
	
	
	==> kubelet <==
	Sep 30 11:33:28 ha-033260 kubelet[1140]: E0930 11:33:28.081579    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696008080579211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.056742    1140 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:33:38 ha-033260 kubelet[1140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:33:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.083473    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696018083201135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:38 ha-033260 kubelet[1140]: E0930 11:33:38.083500    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696018083201135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:48 ha-033260 kubelet[1140]: E0930 11:33:48.086188    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696028084875013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:48 ha-033260 kubelet[1140]: E0930 11:33:48.086221    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696028084875013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:58 ha-033260 kubelet[1140]: E0930 11:33:58.088121    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696038087738311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:33:58 ha-033260 kubelet[1140]: E0930 11:33:58.088151    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696038087738311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:08 ha-033260 kubelet[1140]: E0930 11:34:08.092772    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696048092260165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:08 ha-033260 kubelet[1140]: E0930 11:34:08.092834    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696048092260165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:18 ha-033260 kubelet[1140]: E0930 11:34:18.095537    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696058094945164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:18 ha-033260 kubelet[1140]: E0930 11:34:18.095643    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696058094945164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:28 ha-033260 kubelet[1140]: E0930 11:34:28.099890    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696068097917010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:28 ha-033260 kubelet[1140]: E0930 11:34:28.100222    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696068097917010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:38 ha-033260 kubelet[1140]: E0930 11:34:38.058877    1140 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:34:38 ha-033260 kubelet[1140]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:34:38 ha-033260 kubelet[1140]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:34:38 ha-033260 kubelet[1140]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:34:38 ha-033260 kubelet[1140]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:34:38 ha-033260 kubelet[1140]: E0930 11:34:38.101699    1140 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078101390517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:34:38 ha-033260 kubelet[1140]: E0930 11:34:38.101747    1140 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696078101390517,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-033260 -n ha-033260
helpers_test.go:261: (dbg) Run:  kubectl --context ha-033260 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.93s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (325.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-457103
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-457103
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-457103: exit status 82 (2m1.85750007s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-457103-m03"  ...
	* Stopping node "multinode-457103-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-457103" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-457103 --wait=true -v=8 --alsologtostderr
E0930 11:45:18.067086   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-457103 --wait=true -v=8 --alsologtostderr: (3m21.291822279s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-457103
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-457103 -n multinode-457103
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 logs -n 25: (1.596410455s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103:/home/docker/cp-test_multinode-457103-m02_multinode-457103.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103 sudo cat                                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m02_multinode-457103.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03:/home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103-m03 sudo cat                                   | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp testdata/cp-test.txt                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103:/home/docker/cp-test_multinode-457103-m03_multinode-457103.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103 sudo cat                                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02:/home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103-m02 sudo cat                                   | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-457103 node stop m03                                                          | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	| node    | multinode-457103 node start                                                             | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| stop    | -p multinode-457103                                                                     | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| start   | -p multinode-457103                                                                     | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:44 UTC | 30 Sep 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
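
	Editor's note: the tail of the audit table above is the sequence this subtest drives: stop one worker, start it again, then stop the whole profile and restart it with --wait=true. Note that the plain "stop" row and both "node list" rows have no End Time recorded, which is the window where the failure sits. A hedged reconstruction of that sequence as shell commands (flag placement and the exact binary invocation are assumptions; the subcommands, profile name and flags are taken from the table):

	    out/minikube-linux-amd64 -p multinode-457103 node stop m03
	    out/minikube-linux-amd64 -p multinode-457103 node start m03 -v=7 --alsologtostderr
	    out/minikube-linux-amd64 node list -p multinode-457103
	    out/minikube-linux-amd64 stop -p multinode-457103
	    out/minikube-linux-amd64 start -p multinode-457103 --wait=true -v=8 --alsologtostderr
	    out/minikube-linux-amd64 node list -p multinode-457103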
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:44:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:44:56.700343   45440 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:44:56.700570   45440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:44:56.700578   45440 out.go:358] Setting ErrFile to fd 2...
	I0930 11:44:56.700583   45440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:44:56.700770   45440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:44:56.701309   45440 out.go:352] Setting JSON to false
	I0930 11:44:56.702228   45440 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5244,"bootTime":1727691453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:44:56.702328   45440 start.go:139] virtualization: kvm guest
	I0930 11:44:56.704755   45440 out.go:177] * [multinode-457103] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:44:56.706021   45440 notify.go:220] Checking for updates...
	I0930 11:44:56.706051   45440 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:44:56.707423   45440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:44:56.708732   45440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:44:56.709818   45440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:44:56.710914   45440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:44:56.712057   45440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:44:56.713666   45440 config.go:182] Loaded profile config "multinode-457103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:44:56.713772   45440 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:44:56.714251   45440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:44:56.714308   45440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:44:56.729481   45440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0930 11:44:56.729948   45440 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:44:56.730506   45440 main.go:141] libmachine: Using API Version  1
	I0930 11:44:56.730528   45440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:44:56.730896   45440 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:44:56.731091   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.767348   45440 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:44:56.768735   45440 start.go:297] selected driver: kvm2
	I0930 11:44:56.768750   45440 start.go:901] validating driver "kvm2" against &{Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:44:56.768939   45440 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:44:56.769337   45440 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:44:56.769429   45440 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:44:56.785234   45440 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:44:56.785969   45440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:44:56.786005   45440 cni.go:84] Creating CNI manager for ""
	I0930 11:44:56.786062   45440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 11:44:56.786132   45440 start.go:340] cluster config:
	{Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:44:56.786264   45440 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:44:56.787964   45440 out.go:177] * Starting "multinode-457103" primary control-plane node in "multinode-457103" cluster
	I0930 11:44:56.789500   45440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:44:56.789556   45440 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:44:56.789564   45440 cache.go:56] Caching tarball of preloaded images
	I0930 11:44:56.789665   45440 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:44:56.789677   45440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:44:56.789798   45440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/config.json ...
	I0930 11:44:56.789984   45440 start.go:360] acquireMachinesLock for multinode-457103: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:44:56.790023   45440 start.go:364] duration metric: took 22.945µs to acquireMachinesLock for "multinode-457103"
	I0930 11:44:56.790043   45440 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:44:56.790051   45440 fix.go:54] fixHost starting: 
	I0930 11:44:56.790295   45440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:44:56.790326   45440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:44:56.805203   45440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0930 11:44:56.805738   45440 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:44:56.806308   45440 main.go:141] libmachine: Using API Version  1
	I0930 11:44:56.806333   45440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:44:56.806739   45440 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:44:56.806945   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.807117   45440 main.go:141] libmachine: (multinode-457103) Calling .GetState
	I0930 11:44:56.808801   45440 fix.go:112] recreateIfNeeded on multinode-457103: state=Running err=<nil>
	W0930 11:44:56.808820   45440 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:44:56.810831   45440 out.go:177] * Updating the running kvm2 "multinode-457103" VM ...
	I0930 11:44:56.811978   45440 machine.go:93] provisionDockerMachine start ...
	I0930 11:44:56.812008   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.812255   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:56.815815   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.816428   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:56.816458   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.816706   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:56.816915   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.817089   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.817252   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:56.817419   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:56.817680   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:56.817696   45440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:44:56.924097   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-457103
	
	I0930 11:44:56.924131   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:56.924385   45440 buildroot.go:166] provisioning hostname "multinode-457103"
	I0930 11:44:56.924414   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:56.924608   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:56.927185   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.927579   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:56.927618   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.927766   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:56.927928   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.928079   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.928168   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:56.928285   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:56.928487   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:56.928512   45440 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-457103 && echo "multinode-457103" | sudo tee /etc/hostname
	I0930 11:44:57.042905   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-457103
	
	I0930 11:44:57.042931   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.045844   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.046220   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.046239   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.046468   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.046671   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.046836   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.046963   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.047102   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:57.047315   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:57.047334   45440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-457103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-457103/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-457103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:44:57.150747   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:44:57.150778   45440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:44:57.150819   45440 buildroot.go:174] setting up certificates
	I0930 11:44:57.150829   45440 provision.go:84] configureAuth start
	I0930 11:44:57.150838   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:57.151079   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:44:57.153936   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.154257   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.154285   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.154420   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.156439   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.156922   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.156948   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.157069   45440 provision.go:143] copyHostCerts
	I0930 11:44:57.157094   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:44:57.157126   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:44:57.157135   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:44:57.157201   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:44:57.157290   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:44:57.157315   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:44:57.157322   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:44:57.157350   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:44:57.157407   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:44:57.157423   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:44:57.157430   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:44:57.157451   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:44:57.157495   45440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.multinode-457103 san=[127.0.0.1 192.168.39.219 localhost minikube multinode-457103]
	I0930 11:44:57.354081   45440 provision.go:177] copyRemoteCerts
	I0930 11:44:57.354140   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:44:57.354164   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.356892   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.357297   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.357327   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.357509   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.357716   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.357884   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.357999   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:44:57.443016   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:44:57.443095   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:44:57.471080   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:44:57.471181   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0930 11:44:57.497998   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:44:57.498084   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:44:57.524475   45440 provision.go:87] duration metric: took 373.631513ms to configureAuth
	I0930 11:44:57.524507   45440 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:44:57.524747   45440 config.go:182] Loaded profile config "multinode-457103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:44:57.524832   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.527330   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.527724   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.527745   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.527916   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.528101   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.528232   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.528414   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.528554   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:57.528750   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:57.528771   45440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:46:28.300738   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:46:28.300766   45440 machine.go:96] duration metric: took 1m31.488768595s to provisionDockerMachine
	I0930 11:46:28.300780   45440 start.go:293] postStartSetup for "multinode-457103" (driver="kvm2")
	I0930 11:46:28.300794   45440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:46:28.300814   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.301128   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:46:28.301155   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.304242   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.304638   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.304663   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.304831   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.305010   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.305195   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.305305   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.385486   45440 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:46:28.390398   45440 command_runner.go:130] > NAME=Buildroot
	I0930 11:46:28.390420   45440 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0930 11:46:28.390432   45440 command_runner.go:130] > ID=buildroot
	I0930 11:46:28.390437   45440 command_runner.go:130] > VERSION_ID=2023.02.9
	I0930 11:46:28.390444   45440 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0930 11:46:28.390494   45440 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:46:28.390517   45440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:46:28.390576   45440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:46:28.390651   45440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:46:28.390661   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:46:28.390739   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:46:28.400587   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:46:28.424850   45440 start.go:296] duration metric: took 124.054082ms for postStartSetup
	I0930 11:46:28.424913   45440 fix.go:56] duration metric: took 1m31.63485055s for fixHost
	I0930 11:46:28.424943   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.427593   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.428042   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.428095   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.428196   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.428372   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.428556   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.428672   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.428825   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:46:28.429022   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:46:28.429037   45440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:46:28.534858   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727696788.509603613
	
	I0930 11:46:28.534886   45440 fix.go:216] guest clock: 1727696788.509603613
	I0930 11:46:28.534896   45440 fix.go:229] Guest: 2024-09-30 11:46:28.509603613 +0000 UTC Remote: 2024-09-30 11:46:28.424918658 +0000 UTC m=+91.761087374 (delta=84.684955ms)
	I0930 11:46:28.534927   45440 fix.go:200] guest clock delta is within tolerance: 84.684955ms
	I0930 11:46:28.534932   45440 start.go:83] releasing machines lock for "multinode-457103", held for 1m31.744900385s
	I0930 11:46:28.534957   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.535206   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:46:28.538073   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.538447   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.538477   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.538663   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539323   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539489   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539580   45440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:46:28.539637   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.539704   45440 ssh_runner.go:195] Run: cat /version.json
	I0930 11:46:28.539727   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.542318   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542782   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.542816   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542837   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542919   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.543075   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.543217   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.543231   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.543244   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.543383   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.543381   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.543489   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.543592   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.543674   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.618518   45440 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0930 11:46:28.618852   45440 ssh_runner.go:195] Run: systemctl --version
	I0930 11:46:28.645477   45440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0930 11:46:28.645574   45440 command_runner.go:130] > systemd 252 (252)
	I0930 11:46:28.645596   45440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0930 11:46:28.645676   45440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:46:28.803395   45440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 11:46:28.813419   45440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0930 11:46:28.813844   45440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:46:28.813919   45440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:46:28.824264   45440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:46:28.824304   45440 start.go:495] detecting cgroup driver to use...
	I0930 11:46:28.824373   45440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:46:28.842439   45440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:46:28.858154   45440 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:46:28.858226   45440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:46:28.873826   45440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:46:28.889969   45440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:46:29.046126   45440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:46:29.199394   45440 docker.go:233] disabling docker service ...
	I0930 11:46:29.199471   45440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:46:29.217688   45440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:46:29.232581   45440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:46:29.391092   45440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:46:29.540010   45440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:46:29.554384   45440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:46:29.575499   45440 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0930 11:46:29.575540   45440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:46:29.575588   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.586819   45440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:46:29.586878   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.597945   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.608724   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.619764   45440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:46:29.630871   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.641786   45440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.653373   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.663886   45440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:46:29.673739   45440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0930 11:46:29.673815   45440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:46:29.683764   45440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:46:29.824704   45440 ssh_runner.go:195] Run: sudo systemctl restart crio
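
	Editor's note: the run of sed commands above (11:46:29.575 through 11:46:29.653) rewrites the CRI-O drop-in so the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl match what minikube configures for this runtime, after which crio is restarted. A hypothetical spot-check of the resulting file, not part of the test run, with the expected values inferred from those sed expressions:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",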
	I0930 11:46:30.021787   45440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:46:30.021850   45440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:46:30.027254   45440 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0930 11:46:30.027294   45440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0930 11:46:30.027304   45440 command_runner.go:130] > Device: 0,22	Inode: 1305        Links: 1
	I0930 11:46:30.027313   45440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 11:46:30.027321   45440 command_runner.go:130] > Access: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027330   45440 command_runner.go:130] > Modify: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027339   45440 command_runner.go:130] > Change: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027348   45440 command_runner.go:130] >  Birth: -
	I0930 11:46:30.027373   45440 start.go:563] Will wait 60s for crictl version
	I0930 11:46:30.027442   45440 ssh_runner.go:195] Run: which crictl
	I0930 11:46:30.031915   45440 command_runner.go:130] > /usr/bin/crictl
	I0930 11:46:30.032089   45440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:46:30.077747   45440 command_runner.go:130] > Version:  0.1.0
	I0930 11:46:30.077804   45440 command_runner.go:130] > RuntimeName:  cri-o
	I0930 11:46:30.077813   45440 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0930 11:46:30.077822   45440 command_runner.go:130] > RuntimeApiVersion:  v1
	I0930 11:46:30.077915   45440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:46:30.077974   45440 ssh_runner.go:195] Run: crio --version
	I0930 11:46:30.107067   45440 command_runner.go:130] > crio version 1.29.1
	I0930 11:46:30.107091   45440 command_runner.go:130] > Version:        1.29.1
	I0930 11:46:30.107098   45440 command_runner.go:130] > GitCommit:      unknown
	I0930 11:46:30.107102   45440 command_runner.go:130] > GitCommitDate:  unknown
	I0930 11:46:30.107106   45440 command_runner.go:130] > GitTreeState:   clean
	I0930 11:46:30.107112   45440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 11:46:30.107116   45440 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 11:46:30.107120   45440 command_runner.go:130] > Compiler:       gc
	I0930 11:46:30.107128   45440 command_runner.go:130] > Platform:       linux/amd64
	I0930 11:46:30.107132   45440 command_runner.go:130] > Linkmode:       dynamic
	I0930 11:46:30.107136   45440 command_runner.go:130] > BuildTags:      
	I0930 11:46:30.107140   45440 command_runner.go:130] >   containers_image_ostree_stub
	I0930 11:46:30.107162   45440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 11:46:30.107166   45440 command_runner.go:130] >   btrfs_noversion
	I0930 11:46:30.107171   45440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 11:46:30.107175   45440 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 11:46:30.107179   45440 command_runner.go:130] >   seccomp
	I0930 11:46:30.107188   45440 command_runner.go:130] > LDFlags:          unknown
	I0930 11:46:30.107194   45440 command_runner.go:130] > SeccompEnabled:   true
	I0930 11:46:30.107198   45440 command_runner.go:130] > AppArmorEnabled:  false
	I0930 11:46:30.108525   45440 ssh_runner.go:195] Run: crio --version
	I0930 11:46:30.137467   45440 command_runner.go:130] > crio version 1.29.1
	I0930 11:46:30.137489   45440 command_runner.go:130] > Version:        1.29.1
	I0930 11:46:30.137495   45440 command_runner.go:130] > GitCommit:      unknown
	I0930 11:46:30.137499   45440 command_runner.go:130] > GitCommitDate:  unknown
	I0930 11:46:30.137503   45440 command_runner.go:130] > GitTreeState:   clean
	I0930 11:46:30.137509   45440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 11:46:30.137513   45440 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 11:46:30.137516   45440 command_runner.go:130] > Compiler:       gc
	I0930 11:46:30.137521   45440 command_runner.go:130] > Platform:       linux/amd64
	I0930 11:46:30.137525   45440 command_runner.go:130] > Linkmode:       dynamic
	I0930 11:46:30.137535   45440 command_runner.go:130] > BuildTags:      
	I0930 11:46:30.137539   45440 command_runner.go:130] >   containers_image_ostree_stub
	I0930 11:46:30.137543   45440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 11:46:30.137547   45440 command_runner.go:130] >   btrfs_noversion
	I0930 11:46:30.137552   45440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 11:46:30.137556   45440 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 11:46:30.137559   45440 command_runner.go:130] >   seccomp
	I0930 11:46:30.137563   45440 command_runner.go:130] > LDFlags:          unknown
	I0930 11:46:30.137598   45440 command_runner.go:130] > SeccompEnabled:   true
	I0930 11:46:30.137607   45440 command_runner.go:130] > AppArmorEnabled:  false
	I0930 11:46:30.139671   45440 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:46:30.141009   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:46:30.143641   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:30.143980   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:30.144013   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:30.144204   45440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:46:30.148767   45440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0930 11:46:30.148883   45440 kubeadm.go:883] updating cluster {Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:46:30.149017   45440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:46:30.149084   45440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:46:30.191423   45440 command_runner.go:130] > {
	I0930 11:46:30.191452   45440 command_runner.go:130] >   "images": [
	I0930 11:46:30.191459   45440 command_runner.go:130] >     {
	I0930 11:46:30.191473   45440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 11:46:30.191481   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191490   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 11:46:30.191496   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191502   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191515   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 11:46:30.191530   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 11:46:30.191538   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191551   45440 command_runner.go:130] >       "size": "87190579",
	I0930 11:46:30.191558   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191564   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191571   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191578   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191584   45440 command_runner.go:130] >     },
	I0930 11:46:30.191590   45440 command_runner.go:130] >     {
	I0930 11:46:30.191600   45440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 11:46:30.191607   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191614   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 11:46:30.191617   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191622   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191629   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 11:46:30.191636   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 11:46:30.191641   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191645   45440 command_runner.go:130] >       "size": "1363676",
	I0930 11:46:30.191650   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191663   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191670   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191678   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191684   45440 command_runner.go:130] >     },
	I0930 11:46:30.191690   45440 command_runner.go:130] >     {
	I0930 11:46:30.191700   45440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 11:46:30.191709   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191717   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 11:46:30.191725   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191731   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191745   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 11:46:30.191758   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 11:46:30.191767   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191774   45440 command_runner.go:130] >       "size": "31470524",
	I0930 11:46:30.191782   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191792   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191799   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191805   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191812   45440 command_runner.go:130] >     },
	I0930 11:46:30.191817   45440 command_runner.go:130] >     {
	I0930 11:46:30.191831   45440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 11:46:30.191840   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191851   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 11:46:30.191862   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191871   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191886   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 11:46:30.191902   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 11:46:30.191909   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191913   45440 command_runner.go:130] >       "size": "63273227",
	I0930 11:46:30.191917   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191923   45440 command_runner.go:130] >       "username": "nonroot",
	I0930 11:46:30.191927   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191935   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191939   45440 command_runner.go:130] >     },
	I0930 11:46:30.191943   45440 command_runner.go:130] >     {
	I0930 11:46:30.191948   45440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 11:46:30.191955   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191960   45440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 11:46:30.191965   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191970   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191979   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 11:46:30.192020   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 11:46:30.192030   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192035   45440 command_runner.go:130] >       "size": "149009664",
	I0930 11:46:30.192038   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192043   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192046   45440 command_runner.go:130] >       },
	I0930 11:46:30.192050   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192055   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192059   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192063   45440 command_runner.go:130] >     },
	I0930 11:46:30.192068   45440 command_runner.go:130] >     {
	I0930 11:46:30.192074   45440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 11:46:30.192080   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192085   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 11:46:30.192091   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192095   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192104   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 11:46:30.192113   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 11:46:30.192119   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192124   45440 command_runner.go:130] >       "size": "95237600",
	I0930 11:46:30.192129   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192133   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192138   45440 command_runner.go:130] >       },
	I0930 11:46:30.192143   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192152   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192160   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192168   45440 command_runner.go:130] >     },
	I0930 11:46:30.192177   45440 command_runner.go:130] >     {
	I0930 11:46:30.192189   45440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 11:46:30.192199   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192207   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 11:46:30.192215   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192221   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192236   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 11:46:30.192250   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 11:46:30.192256   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192260   45440 command_runner.go:130] >       "size": "89437508",
	I0930 11:46:30.192266   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192270   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192276   45440 command_runner.go:130] >       },
	I0930 11:46:30.192280   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192286   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192290   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192298   45440 command_runner.go:130] >     },
	I0930 11:46:30.192302   45440 command_runner.go:130] >     {
	I0930 11:46:30.192310   45440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 11:46:30.192316   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192321   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 11:46:30.192327   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192331   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192348   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 11:46:30.192357   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 11:46:30.192363   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192367   45440 command_runner.go:130] >       "size": "92733849",
	I0930 11:46:30.192373   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.192377   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192383   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192387   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192390   45440 command_runner.go:130] >     },
	I0930 11:46:30.192393   45440 command_runner.go:130] >     {
	I0930 11:46:30.192399   45440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 11:46:30.192402   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192407   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 11:46:30.192410   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192413   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192421   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 11:46:30.192429   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 11:46:30.192432   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192436   45440 command_runner.go:130] >       "size": "68420934",
	I0930 11:46:30.192439   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192443   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192446   45440 command_runner.go:130] >       },
	I0930 11:46:30.192450   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192454   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192458   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192461   45440 command_runner.go:130] >     },
	I0930 11:46:30.192464   45440 command_runner.go:130] >     {
	I0930 11:46:30.192470   45440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 11:46:30.192473   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192477   45440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 11:46:30.192481   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192484   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192490   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 11:46:30.192497   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 11:46:30.192502   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192506   45440 command_runner.go:130] >       "size": "742080",
	I0930 11:46:30.192512   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192516   45440 command_runner.go:130] >         "value": "65535"
	I0930 11:46:30.192522   45440 command_runner.go:130] >       },
	I0930 11:46:30.192525   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192531   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192535   45440 command_runner.go:130] >       "pinned": true
	I0930 11:46:30.192540   45440 command_runner.go:130] >     }
	I0930 11:46:30.192549   45440 command_runner.go:130] >   ]
	I0930 11:46:30.192554   45440 command_runner.go:130] > }
	I0930 11:46:30.192714   45440 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:46:30.192724   45440 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:46:30.192764   45440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:46:30.227616   45440 command_runner.go:130] > {
	I0930 11:46:30.227646   45440 command_runner.go:130] >   "images": [
	I0930 11:46:30.227651   45440 command_runner.go:130] >     {
	I0930 11:46:30.227663   45440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 11:46:30.227670   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227678   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 11:46:30.227682   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227687   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227699   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 11:46:30.227710   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 11:46:30.227717   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227724   45440 command_runner.go:130] >       "size": "87190579",
	I0930 11:46:30.227732   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.227743   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.227754   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.227760   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.227767   45440 command_runner.go:130] >     },
	I0930 11:46:30.227773   45440 command_runner.go:130] >     {
	I0930 11:46:30.227784   45440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 11:46:30.227793   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227801   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 11:46:30.227807   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227814   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227827   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 11:46:30.227840   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 11:46:30.227849   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227857   45440 command_runner.go:130] >       "size": "1363676",
	I0930 11:46:30.227866   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.227880   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.227890   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.227897   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.227903   45440 command_runner.go:130] >     },
	I0930 11:46:30.227910   45440 command_runner.go:130] >     {
	I0930 11:46:30.227920   45440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 11:46:30.227929   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227939   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 11:46:30.227948   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227955   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227971   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 11:46:30.227987   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 11:46:30.227995   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228003   45440 command_runner.go:130] >       "size": "31470524",
	I0930 11:46:30.228013   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228021   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228031   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228039   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228047   45440 command_runner.go:130] >     },
	I0930 11:46:30.228054   45440 command_runner.go:130] >     {
	I0930 11:46:30.228068   45440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 11:46:30.228081   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228095   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 11:46:30.228104   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228111   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228126   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 11:46:30.228146   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 11:46:30.228155   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228163   45440 command_runner.go:130] >       "size": "63273227",
	I0930 11:46:30.228178   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228187   45440 command_runner.go:130] >       "username": "nonroot",
	I0930 11:46:30.228201   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228210   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228217   45440 command_runner.go:130] >     },
	I0930 11:46:30.228225   45440 command_runner.go:130] >     {
	I0930 11:46:30.228236   45440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 11:46:30.228245   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228253   45440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 11:46:30.228263   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228271   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228286   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 11:46:30.228300   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 11:46:30.228309   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228317   45440 command_runner.go:130] >       "size": "149009664",
	I0930 11:46:30.228325   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228333   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228341   45440 command_runner.go:130] >       },
	I0930 11:46:30.228349   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228357   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228365   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228372   45440 command_runner.go:130] >     },
	I0930 11:46:30.228378   45440 command_runner.go:130] >     {
	I0930 11:46:30.228406   45440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 11:46:30.228414   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228422   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 11:46:30.228429   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228438   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228452   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 11:46:30.228465   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 11:46:30.228473   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228481   45440 command_runner.go:130] >       "size": "95237600",
	I0930 11:46:30.228491   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228500   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228507   45440 command_runner.go:130] >       },
	I0930 11:46:30.228515   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228525   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228534   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228541   45440 command_runner.go:130] >     },
	I0930 11:46:30.228548   45440 command_runner.go:130] >     {
	I0930 11:46:30.228561   45440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 11:46:30.228570   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228579   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 11:46:30.228589   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228598   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228617   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 11:46:30.228636   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 11:46:30.228645   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228653   45440 command_runner.go:130] >       "size": "89437508",
	I0930 11:46:30.228662   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228668   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228678   45440 command_runner.go:130] >       },
	I0930 11:46:30.228686   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228695   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228702   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228710   45440 command_runner.go:130] >     },
	I0930 11:46:30.228717   45440 command_runner.go:130] >     {
	I0930 11:46:30.228731   45440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 11:46:30.228740   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228747   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 11:46:30.228752   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228758   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228778   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 11:46:30.228794   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 11:46:30.228802   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228810   45440 command_runner.go:130] >       "size": "92733849",
	I0930 11:46:30.228819   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228827   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228836   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228844   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228852   45440 command_runner.go:130] >     },
	I0930 11:46:30.228859   45440 command_runner.go:130] >     {
	I0930 11:46:30.228871   45440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 11:46:30.228880   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228889   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 11:46:30.228899   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228908   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228923   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 11:46:30.228938   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 11:46:30.228948   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228956   45440 command_runner.go:130] >       "size": "68420934",
	I0930 11:46:30.228965   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228973   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228981   45440 command_runner.go:130] >       },
	I0930 11:46:30.228988   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228998   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.229007   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.229015   45440 command_runner.go:130] >     },
	I0930 11:46:30.229021   45440 command_runner.go:130] >     {
	I0930 11:46:30.229035   45440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 11:46:30.229044   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.229066   45440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 11:46:30.229075   45440 command_runner.go:130] >       ],
	I0930 11:46:30.229083   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.229130   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 11:46:30.229148   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 11:46:30.229155   45440 command_runner.go:130] >       ],
	I0930 11:46:30.229165   45440 command_runner.go:130] >       "size": "742080",
	I0930 11:46:30.229172   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.229181   45440 command_runner.go:130] >         "value": "65535"
	I0930 11:46:30.229190   45440 command_runner.go:130] >       },
	I0930 11:46:30.229214   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.229224   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.229231   45440 command_runner.go:130] >       "pinned": true
	I0930 11:46:30.229239   45440 command_runner.go:130] >     }
	I0930 11:46:30.229245   45440 command_runner.go:130] >   ]
	I0930 11:46:30.229253   45440 command_runner.go:130] > }
	I0930 11:46:30.229374   45440 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:46:30.229385   45440 cache_images.go:84] Images are preloaded, skipping loading
	I0930 11:46:30.229397   45440 kubeadm.go:934] updating node { 192.168.39.219 8443 v1.31.1 crio true true} ...
	I0930 11:46:30.229511   45440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-457103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:46:30.229588   45440 ssh_runner.go:195] Run: crio config
	I0930 11:46:30.267837   45440 command_runner.go:130] ! time="2024-09-30 11:46:30.242675717Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0930 11:46:30.273770   45440 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0930 11:46:30.281119   45440 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0930 11:46:30.281150   45440 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0930 11:46:30.281160   45440 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0930 11:46:30.281166   45440 command_runner.go:130] > #
	I0930 11:46:30.281177   45440 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0930 11:46:30.281186   45440 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0930 11:46:30.281195   45440 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0930 11:46:30.281208   45440 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0930 11:46:30.281218   45440 command_runner.go:130] > # reload'.
	I0930 11:46:30.281228   45440 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0930 11:46:30.281241   45440 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0930 11:46:30.281252   45440 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0930 11:46:30.281265   45440 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0930 11:46:30.281273   45440 command_runner.go:130] > [crio]
	I0930 11:46:30.281282   45440 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0930 11:46:30.281293   45440 command_runner.go:130] > # containers images, in this directory.
	I0930 11:46:30.281308   45440 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0930 11:46:30.281324   45440 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0930 11:46:30.281335   45440 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0930 11:46:30.281347   45440 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0930 11:46:30.281354   45440 command_runner.go:130] > # imagestore = ""
	I0930 11:46:30.281360   45440 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0930 11:46:30.281369   45440 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0930 11:46:30.281376   45440 command_runner.go:130] > storage_driver = "overlay"
	I0930 11:46:30.281381   45440 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0930 11:46:30.281389   45440 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0930 11:46:30.281395   45440 command_runner.go:130] > storage_option = [
	I0930 11:46:30.281400   45440 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0930 11:46:30.281405   45440 command_runner.go:130] > ]
	I0930 11:46:30.281412   45440 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0930 11:46:30.281420   45440 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0930 11:46:30.281429   45440 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0930 11:46:30.281436   45440 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0930 11:46:30.281442   45440 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0930 11:46:30.281448   45440 command_runner.go:130] > # always happen on a node reboot
	I0930 11:46:30.281453   45440 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0930 11:46:30.281463   45440 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0930 11:46:30.281471   45440 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0930 11:46:30.281478   45440 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0930 11:46:30.281483   45440 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0930 11:46:30.281493   45440 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0930 11:46:30.281502   45440 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0930 11:46:30.281508   45440 command_runner.go:130] > # internal_wipe = true
	I0930 11:46:30.281516   45440 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0930 11:46:30.281522   45440 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0930 11:46:30.281527   45440 command_runner.go:130] > # internal_repair = false
	I0930 11:46:30.281532   45440 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0930 11:46:30.281539   45440 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0930 11:46:30.281546   45440 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0930 11:46:30.281554   45440 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0930 11:46:30.281560   45440 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0930 11:46:30.281566   45440 command_runner.go:130] > [crio.api]
	I0930 11:46:30.281572   45440 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0930 11:46:30.281579   45440 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0930 11:46:30.281584   45440 command_runner.go:130] > # IP address on which the stream server will listen.
	I0930 11:46:30.281590   45440 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0930 11:46:30.281597   45440 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0930 11:46:30.281604   45440 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0930 11:46:30.281607   45440 command_runner.go:130] > # stream_port = "0"
	I0930 11:46:30.281628   45440 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0930 11:46:30.281635   45440 command_runner.go:130] > # stream_enable_tls = false
	I0930 11:46:30.281647   45440 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0930 11:46:30.281651   45440 command_runner.go:130] > # stream_idle_timeout = ""
	I0930 11:46:30.281660   45440 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0930 11:46:30.281667   45440 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0930 11:46:30.281673   45440 command_runner.go:130] > # minutes.
	I0930 11:46:30.281677   45440 command_runner.go:130] > # stream_tls_cert = ""
	I0930 11:46:30.281685   45440 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0930 11:46:30.281696   45440 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0930 11:46:30.281702   45440 command_runner.go:130] > # stream_tls_key = ""
	I0930 11:46:30.281708   45440 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0930 11:46:30.281716   45440 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0930 11:46:30.281733   45440 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0930 11:46:30.281739   45440 command_runner.go:130] > # stream_tls_ca = ""
	I0930 11:46:30.281747   45440 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 11:46:30.281754   45440 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0930 11:46:30.281760   45440 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 11:46:30.281767   45440 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0930 11:46:30.281773   45440 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0930 11:46:30.281782   45440 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0930 11:46:30.281788   45440 command_runner.go:130] > [crio.runtime]
	I0930 11:46:30.281796   45440 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0930 11:46:30.281804   45440 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0930 11:46:30.281808   45440 command_runner.go:130] > # "nofile=1024:2048"
	I0930 11:46:30.281814   45440 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0930 11:46:30.281820   45440 command_runner.go:130] > # default_ulimits = [
	I0930 11:46:30.281824   45440 command_runner.go:130] > # ]
	I0930 11:46:30.281832   45440 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0930 11:46:30.281837   45440 command_runner.go:130] > # no_pivot = false
	I0930 11:46:30.281842   45440 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0930 11:46:30.281850   45440 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0930 11:46:30.281856   45440 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0930 11:46:30.281863   45440 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0930 11:46:30.281868   45440 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0930 11:46:30.281876   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 11:46:30.281883   45440 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0930 11:46:30.281887   45440 command_runner.go:130] > # Cgroup setting for conmon
	I0930 11:46:30.281897   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0930 11:46:30.281904   45440 command_runner.go:130] > conmon_cgroup = "pod"
	I0930 11:46:30.281910   45440 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0930 11:46:30.281917   45440 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0930 11:46:30.281924   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 11:46:30.281930   45440 command_runner.go:130] > conmon_env = [
	I0930 11:46:30.281936   45440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 11:46:30.281941   45440 command_runner.go:130] > ]
	I0930 11:46:30.281946   45440 command_runner.go:130] > # Additional environment variables to set for all the
	I0930 11:46:30.281951   45440 command_runner.go:130] > # containers. These are overridden if set in the
	I0930 11:46:30.281959   45440 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0930 11:46:30.281963   45440 command_runner.go:130] > # default_env = [
	I0930 11:46:30.281968   45440 command_runner.go:130] > # ]
	I0930 11:46:30.281974   45440 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0930 11:46:30.281983   45440 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0930 11:46:30.281988   45440 command_runner.go:130] > # selinux = false
	I0930 11:46:30.281994   45440 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0930 11:46:30.282003   45440 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0930 11:46:30.282010   45440 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0930 11:46:30.282015   45440 command_runner.go:130] > # seccomp_profile = ""
	I0930 11:46:30.282022   45440 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0930 11:46:30.282040   45440 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0930 11:46:30.282048   45440 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0930 11:46:30.282055   45440 command_runner.go:130] > # which might increase security.
	I0930 11:46:30.282059   45440 command_runner.go:130] > # This option is currently deprecated,
	I0930 11:46:30.282067   45440 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0930 11:46:30.282072   45440 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0930 11:46:30.282079   45440 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0930 11:46:30.282087   45440 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0930 11:46:30.282096   45440 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0930 11:46:30.282104   45440 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0930 11:46:30.282112   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282119   45440 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0930 11:46:30.282125   45440 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0930 11:46:30.282131   45440 command_runner.go:130] > # the cgroup blockio controller.
	I0930 11:46:30.282136   45440 command_runner.go:130] > # blockio_config_file = ""
	I0930 11:46:30.282144   45440 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0930 11:46:30.282147   45440 command_runner.go:130] > # blockio parameters.
	I0930 11:46:30.282156   45440 command_runner.go:130] > # blockio_reload = false
	I0930 11:46:30.282166   45440 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0930 11:46:30.282176   45440 command_runner.go:130] > # irqbalance daemon.
	I0930 11:46:30.282183   45440 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0930 11:46:30.282194   45440 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0930 11:46:30.282208   45440 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0930 11:46:30.282220   45440 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0930 11:46:30.282232   45440 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0930 11:46:30.282246   45440 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0930 11:46:30.282254   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282261   45440 command_runner.go:130] > # rdt_config_file = ""
	I0930 11:46:30.282267   45440 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0930 11:46:30.282275   45440 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0930 11:46:30.282298   45440 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0930 11:46:30.282305   45440 command_runner.go:130] > # separate_pull_cgroup = ""
	I0930 11:46:30.282311   45440 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0930 11:46:30.282319   45440 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0930 11:46:30.282326   45440 command_runner.go:130] > # will be added.
	I0930 11:46:30.282330   45440 command_runner.go:130] > # default_capabilities = [
	I0930 11:46:30.282336   45440 command_runner.go:130] > # 	"CHOWN",
	I0930 11:46:30.282340   45440 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0930 11:46:30.282344   45440 command_runner.go:130] > # 	"FSETID",
	I0930 11:46:30.282349   45440 command_runner.go:130] > # 	"FOWNER",
	I0930 11:46:30.282353   45440 command_runner.go:130] > # 	"SETGID",
	I0930 11:46:30.282359   45440 command_runner.go:130] > # 	"SETUID",
	I0930 11:46:30.282362   45440 command_runner.go:130] > # 	"SETPCAP",
	I0930 11:46:30.282368   45440 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0930 11:46:30.282373   45440 command_runner.go:130] > # 	"KILL",
	I0930 11:46:30.282379   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282387   45440 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0930 11:46:30.282395   45440 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0930 11:46:30.282403   45440 command_runner.go:130] > # add_inheritable_capabilities = false
	I0930 11:46:30.282409   45440 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0930 11:46:30.282417   45440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 11:46:30.282423   45440 command_runner.go:130] > default_sysctls = [
	I0930 11:46:30.282428   45440 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0930 11:46:30.282433   45440 command_runner.go:130] > ]
	I0930 11:46:30.282438   45440 command_runner.go:130] > # List of devices on the host that a
	I0930 11:46:30.282447   45440 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0930 11:46:30.282453   45440 command_runner.go:130] > # allowed_devices = [
	I0930 11:46:30.282457   45440 command_runner.go:130] > # 	"/dev/fuse",
	I0930 11:46:30.282462   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282467   45440 command_runner.go:130] > # List of additional devices. specified as
	I0930 11:46:30.282476   45440 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0930 11:46:30.282484   45440 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0930 11:46:30.282489   45440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 11:46:30.282495   45440 command_runner.go:130] > # additional_devices = [
	I0930 11:46:30.282498   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282504   45440 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0930 11:46:30.282507   45440 command_runner.go:130] > # cdi_spec_dirs = [
	I0930 11:46:30.282513   45440 command_runner.go:130] > # 	"/etc/cdi",
	I0930 11:46:30.282517   45440 command_runner.go:130] > # 	"/var/run/cdi",
	I0930 11:46:30.282520   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282528   45440 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0930 11:46:30.282534   45440 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0930 11:46:30.282540   45440 command_runner.go:130] > # Defaults to false.
	I0930 11:46:30.282547   45440 command_runner.go:130] > # device_ownership_from_security_context = false
	I0930 11:46:30.282556   45440 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0930 11:46:30.282564   45440 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0930 11:46:30.282570   45440 command_runner.go:130] > # hooks_dir = [
	I0930 11:46:30.282574   45440 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0930 11:46:30.282581   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282587   45440 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0930 11:46:30.282596   45440 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0930 11:46:30.282603   45440 command_runner.go:130] > # its default mounts from the following two files:
	I0930 11:46:30.282608   45440 command_runner.go:130] > #
	I0930 11:46:30.282614   45440 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0930 11:46:30.282623   45440 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0930 11:46:30.282631   45440 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0930 11:46:30.282634   45440 command_runner.go:130] > #
	I0930 11:46:30.282642   45440 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0930 11:46:30.282651   45440 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0930 11:46:30.282659   45440 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0930 11:46:30.282666   45440 command_runner.go:130] > #      only add mounts it finds in this file.
	I0930 11:46:30.282669   45440 command_runner.go:130] > #
	I0930 11:46:30.282674   45440 command_runner.go:130] > # default_mounts_file = ""
	I0930 11:46:30.282681   45440 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0930 11:46:30.282688   45440 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0930 11:46:30.282694   45440 command_runner.go:130] > pids_limit = 1024
	I0930 11:46:30.282701   45440 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0930 11:46:30.282709   45440 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0930 11:46:30.282718   45440 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0930 11:46:30.282726   45440 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0930 11:46:30.282732   45440 command_runner.go:130] > # log_size_max = -1
	I0930 11:46:30.282739   45440 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0930 11:46:30.282745   45440 command_runner.go:130] > # log_to_journald = false
	I0930 11:46:30.282751   45440 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0930 11:46:30.282758   45440 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0930 11:46:30.282763   45440 command_runner.go:130] > # Path to directory for container attach sockets.
	I0930 11:46:30.282770   45440 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0930 11:46:30.282775   45440 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0930 11:46:30.282781   45440 command_runner.go:130] > # bind_mount_prefix = ""
	I0930 11:46:30.282786   45440 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0930 11:46:30.282792   45440 command_runner.go:130] > # read_only = false
	I0930 11:46:30.282799   45440 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0930 11:46:30.282808   45440 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0930 11:46:30.282812   45440 command_runner.go:130] > # live configuration reload.
	I0930 11:46:30.282818   45440 command_runner.go:130] > # log_level = "info"
	I0930 11:46:30.282824   45440 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0930 11:46:30.282831   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282835   45440 command_runner.go:130] > # log_filter = ""
	I0930 11:46:30.282843   45440 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0930 11:46:30.282850   45440 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0930 11:46:30.282856   45440 command_runner.go:130] > # separated by comma.
	I0930 11:46:30.282863   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282870   45440 command_runner.go:130] > # uid_mappings = ""
	I0930 11:46:30.282875   45440 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0930 11:46:30.282883   45440 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0930 11:46:30.282889   45440 command_runner.go:130] > # separated by comma.
	I0930 11:46:30.282897   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282903   45440 command_runner.go:130] > # gid_mappings = ""
	I0930 11:46:30.282910   45440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0930 11:46:30.282918   45440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 11:46:30.282924   45440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 11:46:30.282934   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282940   45440 command_runner.go:130] > # minimum_mappable_uid = -1
	I0930 11:46:30.282945   45440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0930 11:46:30.282953   45440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 11:46:30.282961   45440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 11:46:30.282968   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282974   45440 command_runner.go:130] > # minimum_mappable_gid = -1
	I0930 11:46:30.282980   45440 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0930 11:46:30.282988   45440 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0930 11:46:30.282996   45440 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0930 11:46:30.283002   45440 command_runner.go:130] > # ctr_stop_timeout = 30
	I0930 11:46:30.283008   45440 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0930 11:46:30.283016   45440 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0930 11:46:30.283021   45440 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0930 11:46:30.283028   45440 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0930 11:46:30.283032   45440 command_runner.go:130] > drop_infra_ctr = false
	I0930 11:46:30.283040   45440 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0930 11:46:30.283048   45440 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0930 11:46:30.283055   45440 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0930 11:46:30.283061   45440 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0930 11:46:30.283068   45440 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0930 11:46:30.283076   45440 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0930 11:46:30.283083   45440 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0930 11:46:30.283090   45440 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0930 11:46:30.283094   45440 command_runner.go:130] > # shared_cpuset = ""
	I0930 11:46:30.283102   45440 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0930 11:46:30.283109   45440 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0930 11:46:30.283113   45440 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0930 11:46:30.283122   45440 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0930 11:46:30.283128   45440 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0930 11:46:30.283133   45440 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0930 11:46:30.283143   45440 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0930 11:46:30.283151   45440 command_runner.go:130] > # enable_criu_support = false
	I0930 11:46:30.283159   45440 command_runner.go:130] > # Enable/disable the generation of the container,
	I0930 11:46:30.283171   45440 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0930 11:46:30.283180   45440 command_runner.go:130] > # enable_pod_events = false
	I0930 11:46:30.283190   45440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 11:46:30.283213   45440 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0930 11:46:30.283221   45440 command_runner.go:130] > # default_runtime = "runc"
	I0930 11:46:30.283229   45440 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0930 11:46:30.283240   45440 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0930 11:46:30.283251   45440 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0930 11:46:30.283259   45440 command_runner.go:130] > # creation as a file is not desired either.
	I0930 11:46:30.283267   45440 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0930 11:46:30.283301   45440 command_runner.go:130] > # the hostname is being managed dynamically.
	I0930 11:46:30.283314   45440 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0930 11:46:30.283319   45440 command_runner.go:130] > # ]
	I0930 11:46:30.283326   45440 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0930 11:46:30.283334   45440 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0930 11:46:30.283340   45440 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0930 11:46:30.283348   45440 command_runner.go:130] > # Each entry in the table should follow the format:
	I0930 11:46:30.283352   45440 command_runner.go:130] > #
	I0930 11:46:30.283361   45440 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0930 11:46:30.283368   45440 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0930 11:46:30.283400   45440 command_runner.go:130] > # runtime_type = "oci"
	I0930 11:46:30.283408   45440 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0930 11:46:30.283413   45440 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0930 11:46:30.283420   45440 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0930 11:46:30.283424   45440 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0930 11:46:30.283429   45440 command_runner.go:130] > # monitor_env = []
	I0930 11:46:30.283436   45440 command_runner.go:130] > # privileged_without_host_devices = false
	I0930 11:46:30.283440   45440 command_runner.go:130] > # allowed_annotations = []
	I0930 11:46:30.283448   45440 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0930 11:46:30.283452   45440 command_runner.go:130] > # Where:
	I0930 11:46:30.283459   45440 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0930 11:46:30.283466   45440 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0930 11:46:30.283475   45440 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0930 11:46:30.283483   45440 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0930 11:46:30.283487   45440 command_runner.go:130] > #   in $PATH.
	I0930 11:46:30.283493   45440 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0930 11:46:30.283500   45440 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0930 11:46:30.283506   45440 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0930 11:46:30.283512   45440 command_runner.go:130] > #   state.
	I0930 11:46:30.283518   45440 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0930 11:46:30.283523   45440 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0930 11:46:30.283531   45440 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0930 11:46:30.283537   45440 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0930 11:46:30.283546   45440 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0930 11:46:30.283556   45440 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0930 11:46:30.283562   45440 command_runner.go:130] > #   The currently recognized values are:
	I0930 11:46:30.283569   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0930 11:46:30.283578   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0930 11:46:30.283586   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0930 11:46:30.283594   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0930 11:46:30.283605   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0930 11:46:30.283613   45440 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0930 11:46:30.283620   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0930 11:46:30.283628   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0930 11:46:30.283634   45440 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0930 11:46:30.283642   45440 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0930 11:46:30.283649   45440 command_runner.go:130] > #   deprecated option "conmon".
	I0930 11:46:30.283655   45440 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0930 11:46:30.283662   45440 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0930 11:46:30.283668   45440 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0930 11:46:30.283675   45440 command_runner.go:130] > #   should be moved to the container's cgroup
	I0930 11:46:30.283681   45440 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0930 11:46:30.283688   45440 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0930 11:46:30.283694   45440 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0930 11:46:30.283701   45440 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0930 11:46:30.283704   45440 command_runner.go:130] > #
	I0930 11:46:30.283709   45440 command_runner.go:130] > # Using the seccomp notifier feature:
	I0930 11:46:30.283714   45440 command_runner.go:130] > #
	I0930 11:46:30.283720   45440 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0930 11:46:30.283731   45440 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0930 11:46:30.283736   45440 command_runner.go:130] > #
	I0930 11:46:30.283742   45440 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0930 11:46:30.283750   45440 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0930 11:46:30.283753   45440 command_runner.go:130] > #
	I0930 11:46:30.283759   45440 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0930 11:46:30.283764   45440 command_runner.go:130] > # feature.
	I0930 11:46:30.283768   45440 command_runner.go:130] > #
	I0930 11:46:30.283777   45440 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0930 11:46:30.283785   45440 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0930 11:46:30.283791   45440 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0930 11:46:30.283799   45440 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0930 11:46:30.283807   45440 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0930 11:46:30.283810   45440 command_runner.go:130] > #
	I0930 11:46:30.283816   45440 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0930 11:46:30.283824   45440 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0930 11:46:30.283829   45440 command_runner.go:130] > #
	I0930 11:46:30.283835   45440 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0930 11:46:30.283843   45440 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0930 11:46:30.283846   45440 command_runner.go:130] > #
	I0930 11:46:30.283852   45440 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0930 11:46:30.283860   45440 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0930 11:46:30.283866   45440 command_runner.go:130] > # limitation.
	I0930 11:46:30.283872   45440 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0930 11:46:30.283878   45440 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0930 11:46:30.283882   45440 command_runner.go:130] > runtime_type = "oci"
	I0930 11:46:30.283889   45440 command_runner.go:130] > runtime_root = "/run/runc"
	I0930 11:46:30.283893   45440 command_runner.go:130] > runtime_config_path = ""
	I0930 11:46:30.283899   45440 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0930 11:46:30.283903   45440 command_runner.go:130] > monitor_cgroup = "pod"
	I0930 11:46:30.283907   45440 command_runner.go:130] > monitor_exec_cgroup = ""
	I0930 11:46:30.283913   45440 command_runner.go:130] > monitor_env = [
	I0930 11:46:30.283918   45440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 11:46:30.283923   45440 command_runner.go:130] > ]
	I0930 11:46:30.283927   45440 command_runner.go:130] > privileged_without_host_devices = false
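The runc table above is a concrete instance of the handler format documented in the preceding comments. As a sketch only (the crun binary path, root directory and annotation choice are assumptions, not taken from this run), a second handler could be declared in the same shape:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"            # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Opt this handler into the seccomp notifier feature described above
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]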
	I0930 11:46:30.283936   45440 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0930 11:46:30.283943   45440 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0930 11:46:30.283950   45440 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0930 11:46:30.283959   45440 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0930 11:46:30.283968   45440 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0930 11:46:30.283976   45440 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0930 11:46:30.283985   45440 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0930 11:46:30.283995   45440 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0930 11:46:30.284001   45440 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0930 11:46:30.284010   45440 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0930 11:46:30.284016   45440 command_runner.go:130] > # Example:
	I0930 11:46:30.284020   45440 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0930 11:46:30.284027   45440 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0930 11:46:30.284031   45440 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0930 11:46:30.284038   45440 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0930 11:46:30.284042   45440 command_runner.go:130] > # cpuset = 0
	I0930 11:46:30.284048   45440 command_runner.go:130] > # cpushares = "0-1"
	I0930 11:46:30.284052   45440 command_runner.go:130] > # Where:
	I0930 11:46:30.284058   45440 command_runner.go:130] > # The workload name is workload-type.
	I0930 11:46:30.284065   45440 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0930 11:46:30.284072   45440 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0930 11:46:30.284077   45440 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0930 11:46:30.284087   45440 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0930 11:46:30.284095   45440 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
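Putting the workload explanation above together, a hedged example definition might look like the following (the workload name, annotation and resource values are illustrative; cpushares is written as a number and cpuset as a Linux CPU list, matching the two resource types):

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512
	cpuset = "0-1"

A pod then opts in by carrying the io.crio/throttled annotation (key only), and a single container can be overridden with an annotation of the form io.crio.throttled.cpushares/<container-name>.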
	I0930 11:46:30.284102   45440 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0930 11:46:30.284108   45440 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0930 11:46:30.284115   45440 command_runner.go:130] > # Default value is set to true
	I0930 11:46:30.284120   45440 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0930 11:46:30.284127   45440 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0930 11:46:30.284132   45440 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0930 11:46:30.284138   45440 command_runner.go:130] > # Default value is set to 'false'
	I0930 11:46:30.284143   45440 command_runner.go:130] > # disable_hostport_mapping = false
	I0930 11:46:30.284153   45440 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0930 11:46:30.284160   45440 command_runner.go:130] > #
	I0930 11:46:30.284168   45440 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0930 11:46:30.284176   45440 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0930 11:46:30.284186   45440 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0930 11:46:30.284195   45440 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0930 11:46:30.284204   45440 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0930 11:46:30.284211   45440 command_runner.go:130] > [crio.image]
	I0930 11:46:30.284220   45440 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0930 11:46:30.284227   45440 command_runner.go:130] > # default_transport = "docker://"
	I0930 11:46:30.284236   45440 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0930 11:46:30.284245   45440 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0930 11:46:30.284250   45440 command_runner.go:130] > # global_auth_file = ""
	I0930 11:46:30.284257   45440 command_runner.go:130] > # The image used to instantiate infra containers.
	I0930 11:46:30.284262   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.284267   45440 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0930 11:46:30.284273   45440 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0930 11:46:30.284279   45440 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0930 11:46:30.284284   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.284287   45440 command_runner.go:130] > # pause_image_auth_file = ""
	I0930 11:46:30.284297   45440 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0930 11:46:30.284303   45440 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0930 11:46:30.284309   45440 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0930 11:46:30.284314   45440 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0930 11:46:30.284318   45440 command_runner.go:130] > # pause_command = "/pause"
	I0930 11:46:30.284324   45440 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0930 11:46:30.284330   45440 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0930 11:46:30.284335   45440 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0930 11:46:30.284343   45440 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0930 11:46:30.284348   45440 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0930 11:46:30.284354   45440 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0930 11:46:30.284358   45440 command_runner.go:130] > # pinned_images = [
	I0930 11:46:30.284361   45440 command_runner.go:130] > # ]
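The three pattern styles described for pinned_images can be combined in one list; a sketch (image names are examples, with the pause image taken from the pause_image setting above):

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10",    # exact match
		"registry.k8s.io/kube-*",        # glob: wildcard at the end
		"*metrics-server*",              # keyword: wildcards on both ends
	]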
	I0930 11:46:30.284366   45440 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0930 11:46:30.284372   45440 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0930 11:46:30.284378   45440 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0930 11:46:30.284383   45440 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0930 11:46:30.284388   45440 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0930 11:46:30.284395   45440 command_runner.go:130] > # signature_policy = ""
	I0930 11:46:30.284400   45440 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0930 11:46:30.284410   45440 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0930 11:46:30.284418   45440 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0930 11:46:30.284426   45440 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0930 11:46:30.284435   45440 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0930 11:46:30.284442   45440 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0930 11:46:30.284448   45440 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0930 11:46:30.284456   45440 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0930 11:46:30.284462   45440 command_runner.go:130] > # changing them here.
	I0930 11:46:30.284466   45440 command_runner.go:130] > # insecure_registries = [
	I0930 11:46:30.284471   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284477   45440 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0930 11:46:30.284484   45440 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0930 11:46:30.284488   45440 command_runner.go:130] > # image_volumes = "mkdir"
	I0930 11:46:30.284494   45440 command_runner.go:130] > # Temporary directory to use for storing big files
	I0930 11:46:30.284499   45440 command_runner.go:130] > # big_files_temporary_dir = ""
	I0930 11:46:30.284505   45440 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0930 11:46:30.284510   45440 command_runner.go:130] > # CNI plugins.
	I0930 11:46:30.284514   45440 command_runner.go:130] > [crio.network]
	I0930 11:46:30.284520   45440 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0930 11:46:30.284527   45440 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0930 11:46:30.284532   45440 command_runner.go:130] > # cni_default_network = ""
	I0930 11:46:30.284539   45440 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0930 11:46:30.284544   45440 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0930 11:46:30.284553   45440 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0930 11:46:30.284558   45440 command_runner.go:130] > # plugin_dirs = [
	I0930 11:46:30.284562   45440 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0930 11:46:30.284568   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284574   45440 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0930 11:46:30.284580   45440 command_runner.go:130] > [crio.metrics]
	I0930 11:46:30.284584   45440 command_runner.go:130] > # Globally enable or disable metrics support.
	I0930 11:46:30.284590   45440 command_runner.go:130] > enable_metrics = true
	I0930 11:46:30.284595   45440 command_runner.go:130] > # Specify enabled metrics collectors.
	I0930 11:46:30.284599   45440 command_runner.go:130] > # Per default all metrics are enabled.
	I0930 11:46:30.284608   45440 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0930 11:46:30.284614   45440 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0930 11:46:30.284622   45440 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0930 11:46:30.284626   45440 command_runner.go:130] > # metrics_collectors = [
	I0930 11:46:30.284632   45440 command_runner.go:130] > # 	"operations",
	I0930 11:46:30.284636   45440 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0930 11:46:30.284640   45440 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0930 11:46:30.284646   45440 command_runner.go:130] > # 	"operations_errors",
	I0930 11:46:30.284651   45440 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0930 11:46:30.284657   45440 command_runner.go:130] > # 	"image_pulls_by_name",
	I0930 11:46:30.284662   45440 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0930 11:46:30.284668   45440 command_runner.go:130] > # 	"image_pulls_failures",
	I0930 11:46:30.284672   45440 command_runner.go:130] > # 	"image_pulls_successes",
	I0930 11:46:30.284679   45440 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0930 11:46:30.284683   45440 command_runner.go:130] > # 	"image_layer_reuse",
	I0930 11:46:30.284690   45440 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0930 11:46:30.284694   45440 command_runner.go:130] > # 	"containers_oom_total",
	I0930 11:46:30.284700   45440 command_runner.go:130] > # 	"containers_oom",
	I0930 11:46:30.284704   45440 command_runner.go:130] > # 	"processes_defunct",
	I0930 11:46:30.284710   45440 command_runner.go:130] > # 	"operations_total",
	I0930 11:46:30.284714   45440 command_runner.go:130] > # 	"operations_latency_seconds",
	I0930 11:46:30.284720   45440 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0930 11:46:30.284725   45440 command_runner.go:130] > # 	"operations_errors_total",
	I0930 11:46:30.284731   45440 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0930 11:46:30.284735   45440 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0930 11:46:30.284742   45440 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0930 11:46:30.284746   45440 command_runner.go:130] > # 	"image_pulls_success_total",
	I0930 11:46:30.284752   45440 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0930 11:46:30.284756   45440 command_runner.go:130] > # 	"containers_oom_count_total",
	I0930 11:46:30.284763   45440 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0930 11:46:30.284767   45440 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0930 11:46:30.284773   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284778   45440 command_runner.go:130] > # The port on which the metrics server will listen.
	I0930 11:46:30.284783   45440 command_runner.go:130] > # metrics_port = 9090
	I0930 11:46:30.284790   45440 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0930 11:46:30.284794   45440 command_runner.go:130] > # metrics_socket = ""
	I0930 11:46:30.284802   45440 command_runner.go:130] > # The certificate for the secure metrics server.
	I0930 11:46:30.284808   45440 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0930 11:46:30.284816   45440 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0930 11:46:30.284823   45440 command_runner.go:130] > # certificate on any modification event.
	I0930 11:46:30.284827   45440 command_runner.go:130] > # metrics_cert = ""
	I0930 11:46:30.284834   45440 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0930 11:46:30.284838   45440 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0930 11:46:30.284844   45440 command_runner.go:130] > # metrics_key = ""
	I0930 11:46:30.284850   45440 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0930 11:46:30.284856   45440 command_runner.go:130] > [crio.tracing]
	I0930 11:46:30.284862   45440 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0930 11:46:30.284868   45440 command_runner.go:130] > # enable_tracing = false
	I0930 11:46:30.284873   45440 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0930 11:46:30.284878   45440 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0930 11:46:30.284886   45440 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0930 11:46:30.284894   45440 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0930 11:46:30.284898   45440 command_runner.go:130] > # CRI-O NRI configuration.
	I0930 11:46:30.284903   45440 command_runner.go:130] > [crio.nri]
	I0930 11:46:30.284908   45440 command_runner.go:130] > # Globally enable or disable NRI.
	I0930 11:46:30.284914   45440 command_runner.go:130] > # enable_nri = false
	I0930 11:46:30.284918   45440 command_runner.go:130] > # NRI socket to listen on.
	I0930 11:46:30.284925   45440 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0930 11:46:30.284929   45440 command_runner.go:130] > # NRI plugin directory to use.
	I0930 11:46:30.284936   45440 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0930 11:46:30.284940   45440 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0930 11:46:30.284947   45440 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0930 11:46:30.284952   45440 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0930 11:46:30.284958   45440 command_runner.go:130] > # nri_disable_connections = false
	I0930 11:46:30.284963   45440 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0930 11:46:30.284971   45440 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0930 11:46:30.284976   45440 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0930 11:46:30.284983   45440 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0930 11:46:30.284989   45440 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0930 11:46:30.284995   45440 command_runner.go:130] > [crio.stats]
	I0930 11:46:30.285001   45440 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0930 11:46:30.285008   45440 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0930 11:46:30.285012   45440 command_runner.go:130] > # stats_collection_period = 0
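The metrics and stats sections above accept the same TOML overrides as the rest of the file; a hedged sketch that keeps metrics enabled, narrows the collectors and switches stats to periodic collection (only the port and collector names come from the dump above, the other values are illustrative):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]

	[crio.stats]
	# Collect pod/container stats every 10 seconds instead of on demand
	stats_collection_period = 10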
	I0930 11:46:30.285085   45440 cni.go:84] Creating CNI manager for ""
	I0930 11:46:30.285095   45440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 11:46:30.285105   45440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:46:30.285126   45440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-457103 NodeName:multinode-457103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:46:30.285278   45440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-457103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:46:30.285350   45440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:46:30.295631   45440 command_runner.go:130] > kubeadm
	I0930 11:46:30.295652   45440 command_runner.go:130] > kubectl
	I0930 11:46:30.295656   45440 command_runner.go:130] > kubelet
	I0930 11:46:30.295681   45440 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:46:30.295725   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 11:46:30.305927   45440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 11:46:30.324306   45440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:46:30.344271   45440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0930 11:46:30.363305   45440 ssh_runner.go:195] Run: grep 192.168.39.219	control-plane.minikube.internal$ /etc/hosts
	I0930 11:46:30.367795   45440 command_runner.go:130] > 192.168.39.219	control-plane.minikube.internal
	I0930 11:46:30.367871   45440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:46:30.526997   45440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:46:30.543048   45440 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103 for IP: 192.168.39.219
	I0930 11:46:30.543083   45440 certs.go:194] generating shared ca certs ...
	I0930 11:46:30.543105   45440 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:46:30.543279   45440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:46:30.543339   45440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:46:30.543353   45440 certs.go:256] generating profile certs ...
	I0930 11:46:30.543445   45440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/client.key
	I0930 11:46:30.543521   45440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key.37ca6d7c
	I0930 11:46:30.543575   45440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key
	I0930 11:46:30.543591   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:46:30.543610   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:46:30.543629   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:46:30.543649   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:46:30.543668   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:46:30.543687   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:46:30.543706   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:46:30.543725   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:46:30.543791   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:46:30.543846   45440 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:46:30.543860   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:46:30.543901   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:46:30.543934   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:46:30.543966   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:46:30.544020   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:46:30.544061   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.544081   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.544100   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.544713   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:46:30.573242   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:46:30.599704   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:46:30.625383   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:46:30.650839   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:46:30.678871   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:46:30.704951   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:46:30.731562   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:46:30.758841   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:46:30.785798   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:46:30.813233   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:46:30.840267   45440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:46:30.858022   45440 ssh_runner.go:195] Run: openssl version
	I0930 11:46:30.864503   45440 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0930 11:46:30.864588   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:46:30.876155   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881192   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881245   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881314   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.887390   45440 command_runner.go:130] > 51391683
	I0930 11:46:30.887467   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:46:30.897407   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:46:30.909494   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914418   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914456   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914509   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.920739   45440 command_runner.go:130] > 3ec20f2e
	I0930 11:46:30.920822   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:46:30.931094   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:46:30.942903   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.947924   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.948056   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.948118   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.954234   45440 command_runner.go:130] > b5213941
	I0930 11:46:30.954310   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:46:30.965078   45440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:46:30.970000   45440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:46:30.970024   45440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0930 11:46:30.970032   45440 command_runner.go:130] > Device: 253,1	Inode: 1054760     Links: 1
	I0930 11:46:30.970040   45440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 11:46:30.970048   45440 command_runner.go:130] > Access: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970054   45440 command_runner.go:130] > Modify: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970061   45440 command_runner.go:130] > Change: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970068   45440 command_runner.go:130] >  Birth: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970130   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:46:30.976125   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.976215   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:46:30.981871   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.982012   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:46:30.987644   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.987714   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:46:30.993385   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.993465   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:46:30.999580   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.999658   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:46:31.005530   45440 command_runner.go:130] > Certificate will not expire
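Each of the -checkend 86400 probes above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): it prints "Certificate will not expire" and exits 0 when the certificate is still valid beyond that window, or "Certificate will expire" with exit status 1 otherwise. A short sketch of the same check against one of the paths from the log:

        cert=/var/lib/minikube/certs/apiserver-kubelet-client.crt
        if openssl x509 -noout -in "$cert" -checkend 86400; then
            echo "certificate is valid for at least another 24 hours"
        else
            echo "certificate expires within 24 hours (or has already expired)"
        fi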
	I0930 11:46:31.005612   45440 kubeadm.go:392] StartCluster: {Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
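The StartCluster dump above is minikube's in-memory cluster configuration for this test: a three-node multinode-457103 profile (one control plane plus workers m02 and m03) on the kvm2 driver with the crio runtime, 2200 MB of memory per node, and every optional addon disabled. A roughly equivalent invocation, sketched from those fields (the exact flags the test used are not shown in this excerpt, so treat this as an approximation):

        out/minikube-linux-amd64 start -p multinode-457103 \
            --wait=true --memory=2200 --nodes=3 \
            --driver=kvm2 --container-runtime=crio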
	I0930 11:46:31.005720   45440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:46:31.005762   45440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:46:31.048293   45440 command_runner.go:130] > bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de
	I0930 11:46:31.048331   45440 command_runner.go:130] > b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa
	I0930 11:46:31.048338   45440 command_runner.go:130] > 1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332
	I0930 11:46:31.048345   45440 command_runner.go:130] > 27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e
	I0930 11:46:31.048353   45440 command_runner.go:130] > d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867
	I0930 11:46:31.048360   45440 command_runner.go:130] > 81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd
	I0930 11:46:31.048365   45440 command_runner.go:130] > 985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28
	I0930 11:46:31.048379   45440 command_runner.go:130] > 14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06
	I0930 11:46:31.048403   45440 cri.go:89] found id: "bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de"
	I0930 11:46:31.048414   45440 cri.go:89] found id: "b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa"
	I0930 11:46:31.048420   45440 cri.go:89] found id: "1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332"
	I0930 11:46:31.048437   45440 cri.go:89] found id: "27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e"
	I0930 11:46:31.048440   45440 cri.go:89] found id: "d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867"
	I0930 11:46:31.048444   45440 cri.go:89] found id: "81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd"
	I0930 11:46:31.048446   45440 cri.go:89] found id: "985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28"
	I0930 11:46:31.048450   45440 cri.go:89] found id: "14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06"
	I0930 11:46:31.048453   45440 cri.go:89] found id: ""
	I0930 11:46:31.048501   45440 ssh_runner.go:195] Run: sudo runc list -f json
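As part of StartCluster, minikube enumerates the existing kube-system containers through the CRI: the crictl call above returns bare container IDs filtered by the io.kubernetes.pod.namespace label, and each ID is then recorded as a "found id" entry. A small sketch of the same lookup, assuming crictl is available on the node:

        # List every kube-system container, running or exited, by ID only.
        ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
        for id in $ids; do
            # Re-query each ID to see its current state and owning pod (sketch only).
            sudo crictl ps -a --id "$id"
        done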
	
	
	==> CRI-O <==
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.697188917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a77f9273-a440-4c58-8073-06ff86a22f18 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.713154270Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64e5cc48-5e49-4388-9ec8-9cc395e3d984 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.714083514Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hwwdc,Uid:c81334ea-fd48-4e97-9e43-8bd50dabf0cb,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696831835543267,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625182905Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cchmp,Uid:6f096551-b87c-4aca-9345-b054e1af235a,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1727696798046264866,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625188566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d3047bc0-15a5-4820-b5ff-4718909e1d7e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696798022937947,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T11:46:37.625187336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&PodSandboxMetadata{Name:kube-proxy-77tjs,Uid:a40654ea-0812-44b3-bff6-ae1242330075,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1727696797987078190,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625174527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&PodSandboxMetadata{Name:kindnet-8bjzm,Uid:ccb25478-bf00-4afa-94a6-d1c0a2608112,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696797985451741,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625189612Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-457103,Uid:764330db4fc0ab8999abdc9a8ebfe6ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793176232415,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 764330db4fc0ab8999abdc9a8ebfe6ee,kubernetes.io/config.seen: 2024-09-30T11:46:32.621881894Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&PodSandboxMetadat
a{Name:kube-apiserver-multinode-457103,Uid:f3df031ba6c22d5394dc2ec28aa194c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793169739332,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.219:8443,kubernetes.io/config.hash: f3df031ba6c22d5394dc2ec28aa194c6,kubernetes.io/config.seen: 2024-09-30T11:46:32.621877793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-457103,Uid:33eb14e8f7463b06abba97c49965f63f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793165720793,Labels:map[string]string{component: etcd,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.219:2379,kubernetes.io/config.hash: 33eb14e8f7463b06abba97c49965f63f,kubernetes.io/config.seen: 2024-09-30T11:46:32.621883806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-457103,Uid:f59a8bdcb8289b0082097a1f5b7b8fe1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793143669235,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: f59a8bdcb8289b0082097a1f5b7b8fe1,kubernetes.io/config.seen: 2024-09-30T11:46:32.621882928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=64e5cc48-5e49-4388-9ec8-9cc395e3d984 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.716688830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b2224e8-8088-4e18-ab06-1e888de18a5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.716768336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b2224e8-8088-4e18-ab06-1e888de18a5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.717713698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b2224e8-8088-4e18-ab06-1e888de18a5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.759586949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86fb68a9-a67d-4116-8295-964ee8a1c32a name=/runtime.v1.RuntimeService/Version
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.759687381Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86fb68a9-a67d-4116-8295-964ee8a1c32a name=/runtime.v1.RuntimeService/Version
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.763111566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d961e9e8-2bbd-435e-867f-f067ccfb098a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.763642355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696898763605355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d961e9e8-2bbd-435e-867f-f067ccfb098a name=/runtime.v1.ImageService/ImageFsInfo
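The interleaved ListContainers, ListPodSandbox, Version, and ImageFsInfo requests logged above are ordinary CRI calls made against CRI-O's gRPC endpoint by the kubelet and by crictl. The same responses can be pulled by hand from the node, for example (a sketch, assuming the default CRI-O socket and a reasonably recent crictl):

        sock=unix:///var/run/crio/crio.sock
        sudo crictl --runtime-endpoint "$sock" ps           # running containers (ListContainers)
        sudo crictl --runtime-endpoint "$sock" pods         # pod sandboxes (ListPodSandbox)
        sudo crictl --runtime-endpoint "$sock" version      # runtime name and version (Version)
        sudo crictl --runtime-endpoint "$sock" imagefsinfo  # image filesystem usage (ImageFsInfo)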
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.765182112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d445411-a56a-49a5-8b3b-5c0df8fb4419 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.765281898Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d445411-a56a-49a5-8b3b-5c0df8fb4419 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.765886981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d445411-a56a-49a5-8b3b-5c0df8fb4419 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.802840451Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0f093db2-8fbd-433a-896e-f4dec35dc639 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.803380626Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hwwdc,Uid:c81334ea-fd48-4e97-9e43-8bd50dabf0cb,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696831835543267,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625182905Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cchmp,Uid:6f096551-b87c-4aca-9345-b054e1af235a,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1727696798046264866,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625188566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d3047bc0-15a5-4820-b5ff-4718909e1d7e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696798022937947,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T11:46:37.625187336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&PodSandboxMetadata{Name:kube-proxy-77tjs,Uid:a40654ea-0812-44b3-bff6-ae1242330075,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1727696797987078190,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625174527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&PodSandboxMetadata{Name:kindnet-8bjzm,Uid:ccb25478-bf00-4afa-94a6-d1c0a2608112,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696797985451741,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:46:37.625189612Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-457103,Uid:764330db4fc0ab8999abdc9a8ebfe6ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793176232415,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 764330db4fc0ab8999abdc9a8ebfe6ee,kubernetes.io/config.seen: 2024-09-30T11:46:32.621881894Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&PodSandboxMetadat
a{Name:kube-apiserver-multinode-457103,Uid:f3df031ba6c22d5394dc2ec28aa194c6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793169739332,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.219:8443,kubernetes.io/config.hash: f3df031ba6c22d5394dc2ec28aa194c6,kubernetes.io/config.seen: 2024-09-30T11:46:32.621877793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-457103,Uid:33eb14e8f7463b06abba97c49965f63f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793165720793,Labels:map[string]string{component: etcd,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.219:2379,kubernetes.io/config.hash: 33eb14e8f7463b06abba97c49965f63f,kubernetes.io/config.seen: 2024-09-30T11:46:32.621883806Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-457103,Uid:f59a8bdcb8289b0082097a1f5b7b8fe1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727696793143669235,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: f59a8bdcb8289b0082097a1f5b7b8fe1,kubernetes.io/config.seen: 2024-09-30T11:46:32.621882928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-hwwdc,Uid:c81334ea-fd48-4e97-9e43-8bd50dabf0cb,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696471217292067,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:41:09.407556743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-cchmp,Uid:6f096551-b87c-4aca-9345-b054e1af235a,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696418278549970,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:40:16.472284239Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d3047bc0-15a5-4820-b5ff-4718909e1d7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696416774213542,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T11:40:16.468122111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&PodSandboxMetadata{Name:kindnet-8bjzm,Uid:ccb25478-bf00-4afa-94a6-d1c0a2608112,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696404434927276,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:40:04.103146416Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&PodSandboxMetadata{Name:kube-proxy-77tjs,Uid:a40654ea-0812-44b3-bff6-ae1242330075,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696404409169798,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T11:40:04.097463473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-457103,Uid:764330db4fc0ab8999abdc9a8ebfe6ee,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696393713219597,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 764330db4fc0ab8999abdc9a8ebfe6ee,kubernetes.io/config.seen: 2024-09-30T11:39:53.222494179Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-457103,Uid:f59a8bdcb8289b0082097a1f5b7b8fe1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696393707343921,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f59a8bdcb8289b0082097a1f5b7b8fe1,kubernetes.io/config.seen: 2024-09-30T11:39:53.222495155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-457103,Uid:33eb14e8f7463b06abba97c49965f63f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696393704189069,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-45710
3,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.219:2379,kubernetes.io/config.hash: 33eb14e8f7463b06abba97c49965f63f,kubernetes.io/config.seen: 2024-09-30T11:39:53.222489118Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-457103,Uid:f3df031ba6c22d5394dc2ec28aa194c6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727696393702699864,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint
: 192.168.39.219:8443,kubernetes.io/config.hash: f3df031ba6c22d5394dc2ec28aa194c6,kubernetes.io/config.seen: 2024-09-30T11:39:53.222492940Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0f093db2-8fbd-433a-896e-f4dec35dc639 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.805331231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=743293a5-c6f2-46e9-9ae8-2b8357374990 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.805482292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=743293a5-c6f2-46e9-9ae8-2b8357374990 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.805974754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=743293a5-c6f2-46e9-9ae8-2b8357374990 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.826243501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=696c8e12-6860-4da4-a889-5fc2f5c54aea name=/runtime.v1.RuntimeService/Version
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.826314395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=696c8e12-6860-4da4-a889-5fc2f5c54aea name=/runtime.v1.RuntimeService/Version
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.827820797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e580a54-bc79-4eb5-a398-e44a8335def1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.828308432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696898828282943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e580a54-bc79-4eb5-a398-e44a8335def1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.828845390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90f6e9fa-65f4-476d-bf1c-ee325db3d97c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.828937340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90f6e9fa-65f4-476d-bf1c-ee325db3d97c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:48:18 multinode-457103 crio[2743]: time="2024-09-30 11:48:18.829336989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90f6e9fa-65f4-476d-bf1c-ee325db3d97c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	18c91cea164eb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6b7fd835a17fe       busybox-7dff88458-hwwdc
	e42d74aa167a2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   1acc75474f478       kindnet-8bjzm
	4341d06ffd7db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   96352937ab74a       coredns-7c65d6cfc9-cchmp
	7258510df4e37       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   040289e57fee2       kube-proxy-77tjs
	f7a92f8bb933f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   001b8db3f6eee       storage-provisioner
	eaf6af32d43e6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   5d919108fb3ea       kube-scheduler-multinode-457103
	92157ff23e9a4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   a38d93dbf5689       etcd-multinode-457103
	f90b4779f01dd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   7b23262f7de98       kube-controller-manager-multinode-457103
	67c22defd31e2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   47f0193bc86ee       kube-apiserver-multinode-457103
	04966fcb879e6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   a9d252ffc3694       busybox-7dff88458-hwwdc
	bee86c8246408       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   4bbc7b88ac1b3       coredns-7c65d6cfc9-cchmp
	b9b031192f52a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   6d2e9a6295dd7       storage-provisioner
	1692df8a76bd8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   941c0405e0fc6       kindnet-8bjzm
	27d4e50ea8999       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   7d109891f0ca1       kube-proxy-77tjs
	d0c657343cf2c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   e3c3ef7715258       kube-scheduler-multinode-457103
	81d3a3b58452b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   32d020b7d015c       kube-apiserver-multinode-457103
	985558f9028b4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   f05549c8abcc8       etcd-multinode-457103
	14bdb366a11a6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   e410f8da196c6       kube-controller-manager-multinode-457103
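
A comparable CRI-level listing can usually be reproduced by hand, assuming SSH access to the node and that crictl inside the VM is pointed at the CRI-O socket (neither is shown in this capture):

  # list all containers, running and exited, as CRI-O reports them
  minikube -p multinode-457103 ssh -- sudo crictl ps -a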
	
	
	==> coredns [4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56525 - 40399 "HINFO IN 4520725143726543011.5835794679418722259. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02717377s
	
	
	==> coredns [bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de] <==
	[INFO] 10.244.1.2:49166 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00214942s
	[INFO] 10.244.1.2:35848 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013174s
	[INFO] 10.244.1.2:43378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009867s
	[INFO] 10.244.1.2:38560 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001643951s
	[INFO] 10.244.1.2:49873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196008s
	[INFO] 10.244.1.2:57622 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101982s
	[INFO] 10.244.1.2:33177 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126401s
	[INFO] 10.244.0.3:49103 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.0.3:53416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091022s
	[INFO] 10.244.0.3:59575 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.0.3:47749 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073111s
	[INFO] 10.244.1.2:39231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140327s
	[INFO] 10.244.1.2:59236 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161937s
	[INFO] 10.244.1.2:56200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102831s
	[INFO] 10.244.1.2:40944 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095915s
	[INFO] 10.244.0.3:58989 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120725s
	[INFO] 10.244.0.3:52719 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00030147s
	[INFO] 10.244.0.3:60944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116043s
	[INFO] 10.244.0.3:58642 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001092s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238737s
	[INFO] 10.244.1.2:37343 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122519s
	[INFO] 10.244.1.2:45395 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107834s
	[INFO] 10.244.1.2:45571 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
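
The lookups above (kubernetes.default, host.minikube.internal) are the standard in-cluster resolution paths; the same path can be exercised with a throwaway pod, assuming the multinode-457103 kubectl context is still available:

  kubectl --context multinode-457103 run dns-check --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default.svc.cluster.local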
	
	
	==> describe nodes <==
	Name:               multinode-457103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-457103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=multinode-457103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_40_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:39:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-457103
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:48:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    multinode-457103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1745f688465a4020b2f275f7e9845e3f
	  System UUID:                1745f688-465a-4020-b2f2-75f7e9845e3f
	  Boot ID:                    470a25c9-ac55-4ff5-b4fb-26958d2f4a3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hwwdc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 coredns-7c65d6cfc9-cchmp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m15s
	  kube-system                 etcd-multinode-457103                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m20s
	  kube-system                 kindnet-8bjzm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m15s
	  kube-system                 kube-apiserver-multinode-457103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-controller-manager-multinode-457103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-proxy-77tjs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-multinode-457103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m13s                  kube-proxy       
	  Normal  Starting                 100s                   kube-proxy       
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x7 over 8m26s)  kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m20s                  kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m20s                  kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m20s                  kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                  node-controller  Node multinode-457103 event: Registered Node multinode-457103 in Controller
	  Normal  NodeReady                8m3s                   kubelet          Node multinode-457103 status is now: NodeReady
	  Normal  Starting                 107s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s (x8 over 107s)    kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x8 over 107s)    kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x7 over 107s)    kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node multinode-457103 event: Registered Node multinode-457103 in Controller
	
	
	Name:               multinode-457103-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-457103-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=multinode-457103
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_47_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:47:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-457103-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:47:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:47:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:47:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    multinode-457103-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 864d40ee5b994d73a12720b6ae83e95c
	  System UUID:                864d40ee-5b99-4d73-a127-20b6ae83e95c
	  Boot ID:                    00fb2fa0-efa5-41c8-806c-9abcc6a638b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wxt9x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-rb7dr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m32s
	  kube-system                 kube-proxy-dg4xz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m27s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m32s (x2 over 7m33s)  kubelet     Node multinode-457103-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s (x2 over 7m33s)  kubelet     Node multinode-457103-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s (x2 over 7m33s)  kubelet     Node multinode-457103-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m32s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m12s                  kubelet     Node multinode-457103-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-457103-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-457103-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-457103-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-457103-m02 status is now: NodeReady
	
	
	Name:               multinode-457103-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-457103-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=multinode-457103
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_47_57_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:47:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-457103-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:48:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:48:15 +0000   Mon, 30 Sep 2024 11:47:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:48:15 +0000   Mon, 30 Sep 2024 11:47:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:48:15 +0000   Mon, 30 Sep 2024 11:47:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:48:15 +0000   Mon, 30 Sep 2024 11:48:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-457103-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 981e1566605a493d9314cbc42ae6b9c0
	  System UUID:                981e1566-605a-493d-9314-cbc42ae6b9c0
	  Boot ID:                    3b158172-7fc4-4d5f-8cea-b2cc70150a2b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nr59l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-dkgwm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m36s)  kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m36s)  kubelet     Node multinode-457103-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m36s)  kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m16s                  kubelet     Node multinode-457103-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet     Node multinode-457103-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m46s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-457103-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-457103-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-457103-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-457103-m03 status is now: NodeReady
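
The three node descriptions above are rendered from the cluster's Node objects, so the same view can be regenerated later, assuming the kubeconfig context for this profile is still present:

  # re-dump the Node objects for the whole multinode profile
  kubectl --context multinode-457103 describe nodes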
	
	
	==> dmesg <==
	[  +0.067905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060442] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.153422] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.143437] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.297361] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.063117] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +5.155123] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.057543] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.484402] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.079463] kauditd_printk_skb: 69 callbacks suppressed
	[Sep30 11:40] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.141675] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.299430] kauditd_printk_skb: 60 callbacks suppressed
	[Sep30 11:41] kauditd_printk_skb: 14 callbacks suppressed
	[Sep30 11:46] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.153090] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.179178] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.157622] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.288908] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +0.695516] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +1.986089] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +5.801799] kauditd_printk_skb: 184 callbacks suppressed
	[ +15.099055] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.096138] kauditd_printk_skb: 36 callbacks suppressed
	[Sep30 11:47] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd] <==
	{"level":"info","ts":"2024-09-30T11:46:33.971494Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","added-peer-id":"28ab8665a749e374","added-peer-peer-urls":["https://192.168.39.219:2380"]}
	{"level":"info","ts":"2024-09-30T11:46:33.971643Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:46:33.971695Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:46:33.973810Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:33.983956Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T11:46:33.986862Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28ab8665a749e374","initial-advertise-peer-urls":["https://192.168.39.219:2380"],"listen-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.219:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T11:46:33.988167Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T11:46:33.984194Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:46:33.994681Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:46:35.517796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.517920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.517979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgPreVoteResp from 28ab8665a749e374 at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.518025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgVoteResp from 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28ab8665a749e374 elected leader 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.522867Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:multinode-457103 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:46:35.522978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:46:35.523540Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:46:35.524448Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:35.524561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:35.525460Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T11:46:35.525642Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:46:35.525675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T11:46:35.525460Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.219:2379"}
	
	
	==> etcd [985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28] <==
	{"level":"info","ts":"2024-09-30T11:39:54.844083Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:39:54.844132Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T11:39:54.803151Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:multinode-457103 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:39:54.847488Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:39:54.847600Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:39:54.847645Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:40:07.485787Z","caller":"traceutil/trace.go:171","msg":"trace[913904409] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"118.633232ms","start":"2024-09-30T11:40:07.367140Z","end":"2024-09-30T11:40:07.485774Z","steps":["trace[913904409] 'process raft request'  (duration: 118.532637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:40:47.030433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.94418ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16389885759208307633 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-457103-m02.17fa02c3bea6ab6a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-457103-m02.17fa02c3bea6ab6a\" value_size:646 lease:7166513722353530860 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T11:40:47.030823Z","caller":"traceutil/trace.go:171","msg":"trace[1785805709] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"212.433999ms","start":"2024-09-30T11:40:46.818358Z","end":"2024-09-30T11:40:47.030792Z","steps":["trace[1785805709] 'process raft request'  (duration: 80.397287ms)","trace[1785805709] 'compare'  (duration: 130.79024ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T11:40:53.384655Z","caller":"traceutil/trace.go:171","msg":"trace[1454558008] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"205.370784ms","start":"2024-09-30T11:40:53.179270Z","end":"2024-09-30T11:40:53.384641Z","steps":["trace[1454558008] 'process raft request'  (duration: 204.955318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:41:43.612976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.539966ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T11:41:43.613540Z","caller":"traceutil/trace.go:171","msg":"trace[2022186823] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"175.915098ms","start":"2024-09-30T11:41:43.437297Z","end":"2024-09-30T11:41:43.613212Z","steps":["trace[2022186823] 'range keys from in-memory index tree'  (duration: 175.515567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:41:43.617977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.409112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T11:41:43.618037Z","caller":"traceutil/trace.go:171","msg":"trace[247210808] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:613; }","duration":"112.484138ms","start":"2024-09-30T11:41:43.505537Z","end":"2024-09-30T11:41:43.618022Z","steps":["trace[247210808] 'agreement among raft nodes before linearized reading'  (duration: 112.387056ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T11:41:43.618282Z","caller":"traceutil/trace.go:171","msg":"trace[1798779071] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"112.167649ms","start":"2024-09-30T11:41:43.505606Z","end":"2024-09-30T11:41:43.617774Z","steps":["trace[1798779071] 'read index received'  (duration: 100.194025ms)","trace[1798779071] 'applied index is now lower than readState.Index'  (duration: 11.972587ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T11:44:57.648085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T11:44:57.648236Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-457103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"]}
	{"level":"warn","ts":"2024-09-30T11:44:57.650117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.650307Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.689663Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.219:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.689698Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.219:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T11:44:57.689751Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28ab8665a749e374","current-leader-member-id":"28ab8665a749e374"}
	{"level":"info","ts":"2024-09-30T11:44:57.692584Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:44:57.692791Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:44:57.692851Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-457103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"]}
	
	
	==> kernel <==
	 11:48:19 up 8 min,  0 users,  load average: 0.11, 0.17, 0.10
	Linux multinode-457103 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332] <==
	I0930 11:44:16.271981       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:26.271915       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:26.272120       1 main.go:299] handling current node
	I0930 11:44:26.272158       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:26.272192       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:26.272464       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:26.272497       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:36.266987       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:36.267155       1 main.go:299] handling current node
	I0930 11:44:36.267226       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:36.267234       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:36.267530       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:36.267556       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:46.271058       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:46.271142       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:46.271314       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:46.271339       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:46.271526       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:46.271551       1 main.go:299] handling current node
	I0930 11:44:56.265017       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:56.265186       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:56.265514       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:56.265555       1 main.go:299] handling current node
	I0930 11:44:56.265569       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:56.265575       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150] <==
	I0930 11:47:29.482873       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:47:39.482478       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:47:39.482530       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:47:39.482681       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:47:39.482716       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:47:39.482829       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:47:39.482839       1 main.go:299] handling current node
	I0930 11:47:49.482362       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:47:49.482574       1 main.go:299] handling current node
	I0930 11:47:49.482616       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:47:49.482646       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:47:49.482849       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:47:49.482902       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:47:59.482886       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:47:59.483033       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.2.0/24] 
	I0930 11:47:59.483230       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:47:59.483260       1 main.go:299] handling current node
	I0930 11:47:59.483283       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:47:59.483299       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:48:09.482986       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:48:09.483084       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:48:09.483235       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:48:09.483276       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.2.0/24] 
	I0930 11:48:09.483363       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:48:09.483450       1 main.go:299] handling current node
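
Note that the restarted kindnet instance switches to 10.244.2.0/24 for multinode-457103-m03 at 11:47:59 (matching the node's PodCIDR shown above), having carried the earlier 10.244.3.0/24 assignment until the node re-registered. kindnet materialises these per-node CIDRs as host routes via each node's InternalIP, so the applied state can be checked from the node, assuming SSH access:

  # routes installed for the other nodes' pod CIDRs
  minikube -p multinode-457103 ssh -- ip route | grep 10.244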
	
	
	==> kube-apiserver [67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65] <==
	I0930 11:46:37.006115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:46:37.006354       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:46:37.006499       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:46:37.006508       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:46:37.006512       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:46:37.006517       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:46:37.006878       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:46:37.006946       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:46:37.013206       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 11:46:37.056765       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:46:37.056859       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:46:37.057006       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0930 11:46:37.063788       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 11:46:37.064371       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:46:37.070962       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:46:37.071034       1 policy_source.go:224] refreshing policies
	I0930 11:46:37.074709       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:46:37.861966       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 11:46:39.339262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 11:46:39.484747       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 11:46:39.505607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 11:46:39.585133       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 11:46:39.596632       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 11:46:40.354285       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:46:40.699513       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd] <==
	W0930 11:44:57.680829       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680858       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680889       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680914       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681125       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681174       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681220       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681253       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681285       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681322       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681355       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681454       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.683571       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684133       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684183       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684219       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684254       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685229       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685275       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685302       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685327       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685366       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685524       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.686249       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.686298       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06] <==
	I0930 11:42:32.295361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:32.296124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.387270       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:33.388612       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-457103-m03\" does not exist"
	I0930 11:42:33.400322       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-457103-m03" podCIDRs=["10.244.3.0/24"]
	I0930 11:42:33.400363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.400462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.410037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.817223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:34.161895       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:38.454702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:43.710884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:52.103851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:52.104004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:52.116883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:53.383193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:33.400260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:33.400660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m03"
	I0930 11:43:33.415757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:33.466240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.629707ms"
	I0930 11:43:33.466597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.756µs"
	I0930 11:43:38.468232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:38.484490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:38.559900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:48.645037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	
	
	==> kube-controller-manager [f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e] <==
	I0930 11:47:36.412855       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:47:36.427559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:47:36.445997       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.362µs"
	I0930 11:47:36.473042       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.565µs"
	I0930 11:47:38.022453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.411685ms"
	I0930 11:47:38.022843       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.778µs"
	I0930 11:47:40.358152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:47:48.079025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:47:55.294190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:55.310542       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:55.553764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:55.554063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:47:57.193267       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:47:57.194004       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-457103-m03\" does not exist"
	I0930 11:47:57.209145       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-457103-m03" podCIDRs=["10.244.2.0/24"]
	I0930 11:47:57.209275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.209982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.220708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.594039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.918297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:00.379973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:07.261833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:15.822161       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m03"
	I0930 11:48:15.822379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:15.835620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	
	
	==> kube-proxy [27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:40:05.504472       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:40:05.605877       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E0930 11:40:05.607818       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:40:05.670376       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:40:05.670469       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:40:05.670494       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:40:05.674557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:40:05.674824       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:40:05.674857       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:40:05.677059       1 config.go:199] "Starting service config controller"
	I0930 11:40:05.677101       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:40:05.677126       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:40:05.677130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:40:05.677846       1 config.go:328] "Starting node config controller"
	I0930 11:40:05.677875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:40:05.777161       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:40:05.777244       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:40:05.778703       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:46:38.838355       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:46:38.872647       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E0930 11:46:38.872749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:46:38.958626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:46:38.958659       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:46:38.958683       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:46:38.975570       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:46:38.975958       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:46:38.975974       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:46:38.985008       1 config.go:199] "Starting service config controller"
	I0930 11:46:38.985053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:46:38.985079       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:46:38.985084       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:46:38.985112       1 config.go:328] "Starting node config controller"
	I0930 11:46:38.985134       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:46:39.085232       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:46:39.085282       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:46:39.085303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867] <==
	E0930 11:39:56.589230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.441225       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:39:57.441289       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 11:39:57.467689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:39:57.467825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.514752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:39:57.514874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.581376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 11:39:57.581500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.627985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 11:39:57.628045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.665381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.665478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.718176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 11:39:57.718231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.760155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:39:57.760213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.804591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:39:57.804644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.876053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.876112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.947148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.947325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 11:39:59.657690       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:44:57.645601       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d] <==
	I0930 11:46:34.681821       1 serving.go:386] Generated self-signed cert in-memory
	W0930 11:46:36.929216       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 11:46:36.929475       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 11:46:36.929567       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 11:46:36.929597       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 11:46:36.968599       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 11:46:36.968703       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:46:36.980381       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 11:46:36.980664       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:46:36.980515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 11:46:36.980573       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:46:37.082932       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 11:46:42 multinode-457103 kubelet[2957]: E0930 11:46:42.705847    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696802705301853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:46:42 multinode-457103 kubelet[2957]: E0930 11:46:42.705891    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696802705301853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:46:52 multinode-457103 kubelet[2957]: E0930 11:46:52.707294    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696812706960288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:46:52 multinode-457103 kubelet[2957]: E0930 11:46:52.707341    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696812706960288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:02 multinode-457103 kubelet[2957]: E0930 11:47:02.709587    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696822709002404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:02 multinode-457103 kubelet[2957]: E0930 11:47:02.709923    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696822709002404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:12 multinode-457103 kubelet[2957]: E0930 11:47:12.711912    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696832711550275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:12 multinode-457103 kubelet[2957]: E0930 11:47:12.711958    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696832711550275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:22 multinode-457103 kubelet[2957]: E0930 11:47:22.713821    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696842713374891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:22 multinode-457103 kubelet[2957]: E0930 11:47:22.713847    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696842713374891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:32 multinode-457103 kubelet[2957]: E0930 11:47:32.716166    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696852715688489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:32 multinode-457103 kubelet[2957]: E0930 11:47:32.716266    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696852715688489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:32 multinode-457103 kubelet[2957]: E0930 11:47:32.737271    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:47:32 multinode-457103 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:47:32 multinode-457103 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:47:32 multinode-457103 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:47:32 multinode-457103 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:47:42 multinode-457103 kubelet[2957]: E0930 11:47:42.718469    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696862717918040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:42 multinode-457103 kubelet[2957]: E0930 11:47:42.718493    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696862717918040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:52 multinode-457103 kubelet[2957]: E0930 11:47:52.720600    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696872720126445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:47:52 multinode-457103 kubelet[2957]: E0930 11:47:52.720651    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696872720126445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:48:02 multinode-457103 kubelet[2957]: E0930 11:48:02.723235    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696882722731876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:48:02 multinode-457103 kubelet[2957]: E0930 11:48:02.723840    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696882722731876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:48:12 multinode-457103 kubelet[2957]: E0930 11:48:12.726015    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696892725712366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:48:12 multinode-457103 kubelet[2957]: E0930 11:48:12.726041    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696892725712366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:48:18.334905   46571 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-457103 -n multinode-457103
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-457103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.52s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 stop
E0930 11:50:18.067270   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-457103 stop: exit status 82 (2m0.47713328s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-457103-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-457103 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 status: (18.767821527s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr: (3.391988832s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-457103 -n multinode-457103
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 logs -n 25: (1.454897971s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103:/home/docker/cp-test_multinode-457103-m02_multinode-457103.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103 sudo cat                                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m02_multinode-457103.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03:/home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103-m03 sudo cat                                   | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp testdata/cp-test.txt                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103:/home/docker/cp-test_multinode-457103-m03_multinode-457103.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103 sudo cat                                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02:/home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103-m02 sudo cat                                   | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-457103 node stop m03                                                          | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	| node    | multinode-457103 node start                                                             | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| stop    | -p multinode-457103                                                                     | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| start   | -p multinode-457103                                                                     | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:44 UTC | 30 Sep 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC |                     |
	| node    | multinode-457103 node delete                                                            | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC | 30 Sep 24 11:48 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-457103 stop                                                                   | multinode-457103 | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:44:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:44:56.700343   45440 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:44:56.700570   45440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:44:56.700578   45440 out.go:358] Setting ErrFile to fd 2...
	I0930 11:44:56.700583   45440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:44:56.700770   45440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:44:56.701309   45440 out.go:352] Setting JSON to false
	I0930 11:44:56.702228   45440 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5244,"bootTime":1727691453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:44:56.702328   45440 start.go:139] virtualization: kvm guest
	I0930 11:44:56.704755   45440 out.go:177] * [multinode-457103] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:44:56.706021   45440 notify.go:220] Checking for updates...
	I0930 11:44:56.706051   45440 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:44:56.707423   45440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:44:56.708732   45440 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:44:56.709818   45440 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:44:56.710914   45440 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:44:56.712057   45440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:44:56.713666   45440 config.go:182] Loaded profile config "multinode-457103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:44:56.713772   45440 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:44:56.714251   45440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:44:56.714308   45440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:44:56.729481   45440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0930 11:44:56.729948   45440 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:44:56.730506   45440 main.go:141] libmachine: Using API Version  1
	I0930 11:44:56.730528   45440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:44:56.730896   45440 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:44:56.731091   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.767348   45440 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:44:56.768735   45440 start.go:297] selected driver: kvm2
	I0930 11:44:56.768750   45440 start.go:901] validating driver "kvm2" against &{Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:44:56.768939   45440 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:44:56.769337   45440 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:44:56.769429   45440 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:44:56.785234   45440 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:44:56.785969   45440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:44:56.786005   45440 cni.go:84] Creating CNI manager for ""
	I0930 11:44:56.786062   45440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 11:44:56.786132   45440 start.go:340] cluster config:
	{Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:44:56.786264   45440 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:44:56.787964   45440 out.go:177] * Starting "multinode-457103" primary control-plane node in "multinode-457103" cluster
	I0930 11:44:56.789500   45440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:44:56.789556   45440 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 11:44:56.789564   45440 cache.go:56] Caching tarball of preloaded images
	I0930 11:44:56.789665   45440 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 11:44:56.789677   45440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 11:44:56.789798   45440 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/config.json ...
	I0930 11:44:56.789984   45440 start.go:360] acquireMachinesLock for multinode-457103: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:44:56.790023   45440 start.go:364] duration metric: took 22.945µs to acquireMachinesLock for "multinode-457103"
	I0930 11:44:56.790043   45440 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:44:56.790051   45440 fix.go:54] fixHost starting: 
	I0930 11:44:56.790295   45440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:44:56.790326   45440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:44:56.805203   45440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0930 11:44:56.805738   45440 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:44:56.806308   45440 main.go:141] libmachine: Using API Version  1
	I0930 11:44:56.806333   45440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:44:56.806739   45440 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:44:56.806945   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.807117   45440 main.go:141] libmachine: (multinode-457103) Calling .GetState
	I0930 11:44:56.808801   45440 fix.go:112] recreateIfNeeded on multinode-457103: state=Running err=<nil>
	W0930 11:44:56.808820   45440 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:44:56.810831   45440 out.go:177] * Updating the running kvm2 "multinode-457103" VM ...
	I0930 11:44:56.811978   45440 machine.go:93] provisionDockerMachine start ...
	I0930 11:44:56.812008   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:44:56.812255   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:56.815815   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.816428   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:56.816458   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.816706   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:56.816915   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.817089   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.817252   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:56.817419   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:56.817680   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:56.817696   45440 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:44:56.924097   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-457103
	
	I0930 11:44:56.924131   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:56.924385   45440 buildroot.go:166] provisioning hostname "multinode-457103"
	I0930 11:44:56.924414   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:56.924608   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:56.927185   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.927579   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:56.927618   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:56.927766   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:56.927928   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.928079   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:56.928168   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:56.928285   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:56.928487   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:56.928512   45440 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-457103 && echo "multinode-457103" | sudo tee /etc/hostname
	I0930 11:44:57.042905   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-457103
	
	I0930 11:44:57.042931   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.045844   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.046220   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.046239   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.046468   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.046671   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.046836   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.046963   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.047102   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:57.047315   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:57.047334   45440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-457103' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-457103/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-457103' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:44:57.150747   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:44:57.150778   45440 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:44:57.150819   45440 buildroot.go:174] setting up certificates
	I0930 11:44:57.150829   45440 provision.go:84] configureAuth start
	I0930 11:44:57.150838   45440 main.go:141] libmachine: (multinode-457103) Calling .GetMachineName
	I0930 11:44:57.151079   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:44:57.153936   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.154257   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.154285   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.154420   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.156439   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.156922   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.156948   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.157069   45440 provision.go:143] copyHostCerts
	I0930 11:44:57.157094   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:44:57.157126   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:44:57.157135   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:44:57.157201   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:44:57.157290   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:44:57.157315   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:44:57.157322   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:44:57.157350   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:44:57.157407   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:44:57.157423   45440 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:44:57.157430   45440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:44:57.157451   45440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:44:57.157495   45440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.multinode-457103 san=[127.0.0.1 192.168.39.219 localhost minikube multinode-457103]
	I0930 11:44:57.354081   45440 provision.go:177] copyRemoteCerts
	I0930 11:44:57.354140   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:44:57.354164   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.356892   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.357297   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.357327   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.357509   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.357716   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.357884   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.357999   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:44:57.443016   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0930 11:44:57.443095   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:44:57.471080   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0930 11:44:57.471181   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0930 11:44:57.497998   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0930 11:44:57.498084   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 11:44:57.524475   45440 provision.go:87] duration metric: took 373.631513ms to configureAuth
	I0930 11:44:57.524507   45440 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:44:57.524747   45440 config.go:182] Loaded profile config "multinode-457103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:44:57.524832   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:44:57.527330   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.527724   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:44:57.527745   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:44:57.527916   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:44:57.528101   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.528232   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:44:57.528414   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:44:57.528554   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:44:57.528750   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:44:57.528771   45440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:46:28.300738   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:46:28.300766   45440 machine.go:96] duration metric: took 1m31.488768595s to provisionDockerMachine
	I0930 11:46:28.300780   45440 start.go:293] postStartSetup for "multinode-457103" (driver="kvm2")
	I0930 11:46:28.300794   45440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:46:28.300814   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.301128   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:46:28.301155   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.304242   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.304638   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.304663   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.304831   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.305010   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.305195   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.305305   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.385486   45440 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:46:28.390398   45440 command_runner.go:130] > NAME=Buildroot
	I0930 11:46:28.390420   45440 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0930 11:46:28.390432   45440 command_runner.go:130] > ID=buildroot
	I0930 11:46:28.390437   45440 command_runner.go:130] > VERSION_ID=2023.02.9
	I0930 11:46:28.390444   45440 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0930 11:46:28.390494   45440 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:46:28.390517   45440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:46:28.390576   45440 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:46:28.390651   45440 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:46:28.390661   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /etc/ssl/certs/110092.pem
	I0930 11:46:28.390739   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:46:28.400587   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:46:28.424850   45440 start.go:296] duration metric: took 124.054082ms for postStartSetup
	I0930 11:46:28.424913   45440 fix.go:56] duration metric: took 1m31.63485055s for fixHost
	I0930 11:46:28.424943   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.427593   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.428042   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.428095   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.428196   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.428372   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.428556   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.428672   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.428825   45440 main.go:141] libmachine: Using SSH client type: native
	I0930 11:46:28.429022   45440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0930 11:46:28.429037   45440 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:46:28.534858   45440 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727696788.509603613
	
	I0930 11:46:28.534886   45440 fix.go:216] guest clock: 1727696788.509603613
	I0930 11:46:28.534896   45440 fix.go:229] Guest: 2024-09-30 11:46:28.509603613 +0000 UTC Remote: 2024-09-30 11:46:28.424918658 +0000 UTC m=+91.761087374 (delta=84.684955ms)
	I0930 11:46:28.534927   45440 fix.go:200] guest clock delta is within tolerance: 84.684955ms
	I0930 11:46:28.534932   45440 start.go:83] releasing machines lock for "multinode-457103", held for 1m31.744900385s
	I0930 11:46:28.534957   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.535206   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:46:28.538073   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.538447   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.538477   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.538663   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539323   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539489   45440 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:46:28.539580   45440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:46:28.539637   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.539704   45440 ssh_runner.go:195] Run: cat /version.json
	I0930 11:46:28.539727   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:46:28.542318   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542782   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.542816   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542837   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.542919   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.543075   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.543217   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:28.543231   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.543244   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:28.543383   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:46:28.543381   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.543489   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:46:28.543592   45440 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:46:28.543674   45440 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:46:28.618518   45440 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0930 11:46:28.618852   45440 ssh_runner.go:195] Run: systemctl --version
	I0930 11:46:28.645477   45440 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0930 11:46:28.645574   45440 command_runner.go:130] > systemd 252 (252)
	I0930 11:46:28.645596   45440 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0930 11:46:28.645676   45440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:46:28.803395   45440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 11:46:28.813419   45440 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0930 11:46:28.813844   45440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:46:28.813919   45440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:46:28.824264   45440 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 11:46:28.824304   45440 start.go:495] detecting cgroup driver to use...
	I0930 11:46:28.824373   45440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:46:28.842439   45440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:46:28.858154   45440 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:46:28.858226   45440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:46:28.873826   45440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:46:28.889969   45440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:46:29.046126   45440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:46:29.199394   45440 docker.go:233] disabling docker service ...
	I0930 11:46:29.199471   45440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:46:29.217688   45440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:46:29.232581   45440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:46:29.391092   45440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:46:29.540010   45440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:46:29.554384   45440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:46:29.575499   45440 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0930 11:46:29.575540   45440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 11:46:29.575588   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.586819   45440 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:46:29.586878   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.597945   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.608724   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.619764   45440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:46:29.630871   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.641786   45440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.653373   45440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:46:29.663886   45440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:46:29.673739   45440 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0930 11:46:29.673815   45440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 11:46:29.683764   45440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:46:29.824704   45440 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:46:30.021787   45440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:46:30.021850   45440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:46:30.027254   45440 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0930 11:46:30.027294   45440 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0930 11:46:30.027304   45440 command_runner.go:130] > Device: 0,22	Inode: 1305        Links: 1
	I0930 11:46:30.027313   45440 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 11:46:30.027321   45440 command_runner.go:130] > Access: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027330   45440 command_runner.go:130] > Modify: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027339   45440 command_runner.go:130] > Change: 2024-09-30 11:46:29.890850592 +0000
	I0930 11:46:30.027348   45440 command_runner.go:130] >  Birth: -
	I0930 11:46:30.027373   45440 start.go:563] Will wait 60s for crictl version
	I0930 11:46:30.027442   45440 ssh_runner.go:195] Run: which crictl
	I0930 11:46:30.031915   45440 command_runner.go:130] > /usr/bin/crictl
	I0930 11:46:30.032089   45440 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:46:30.077747   45440 command_runner.go:130] > Version:  0.1.0
	I0930 11:46:30.077804   45440 command_runner.go:130] > RuntimeName:  cri-o
	I0930 11:46:30.077813   45440 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0930 11:46:30.077822   45440 command_runner.go:130] > RuntimeApiVersion:  v1
	I0930 11:46:30.077915   45440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:46:30.077974   45440 ssh_runner.go:195] Run: crio --version
	I0930 11:46:30.107067   45440 command_runner.go:130] > crio version 1.29.1
	I0930 11:46:30.107091   45440 command_runner.go:130] > Version:        1.29.1
	I0930 11:46:30.107098   45440 command_runner.go:130] > GitCommit:      unknown
	I0930 11:46:30.107102   45440 command_runner.go:130] > GitCommitDate:  unknown
	I0930 11:46:30.107106   45440 command_runner.go:130] > GitTreeState:   clean
	I0930 11:46:30.107112   45440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 11:46:30.107116   45440 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 11:46:30.107120   45440 command_runner.go:130] > Compiler:       gc
	I0930 11:46:30.107128   45440 command_runner.go:130] > Platform:       linux/amd64
	I0930 11:46:30.107132   45440 command_runner.go:130] > Linkmode:       dynamic
	I0930 11:46:30.107136   45440 command_runner.go:130] > BuildTags:      
	I0930 11:46:30.107140   45440 command_runner.go:130] >   containers_image_ostree_stub
	I0930 11:46:30.107162   45440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 11:46:30.107166   45440 command_runner.go:130] >   btrfs_noversion
	I0930 11:46:30.107171   45440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 11:46:30.107175   45440 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 11:46:30.107179   45440 command_runner.go:130] >   seccomp
	I0930 11:46:30.107188   45440 command_runner.go:130] > LDFlags:          unknown
	I0930 11:46:30.107194   45440 command_runner.go:130] > SeccompEnabled:   true
	I0930 11:46:30.107198   45440 command_runner.go:130] > AppArmorEnabled:  false
	I0930 11:46:30.108525   45440 ssh_runner.go:195] Run: crio --version
	I0930 11:46:30.137467   45440 command_runner.go:130] > crio version 1.29.1
	I0930 11:46:30.137489   45440 command_runner.go:130] > Version:        1.29.1
	I0930 11:46:30.137495   45440 command_runner.go:130] > GitCommit:      unknown
	I0930 11:46:30.137499   45440 command_runner.go:130] > GitCommitDate:  unknown
	I0930 11:46:30.137503   45440 command_runner.go:130] > GitTreeState:   clean
	I0930 11:46:30.137509   45440 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0930 11:46:30.137513   45440 command_runner.go:130] > GoVersion:      go1.21.6
	I0930 11:46:30.137516   45440 command_runner.go:130] > Compiler:       gc
	I0930 11:46:30.137521   45440 command_runner.go:130] > Platform:       linux/amd64
	I0930 11:46:30.137525   45440 command_runner.go:130] > Linkmode:       dynamic
	I0930 11:46:30.137535   45440 command_runner.go:130] > BuildTags:      
	I0930 11:46:30.137539   45440 command_runner.go:130] >   containers_image_ostree_stub
	I0930 11:46:30.137543   45440 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0930 11:46:30.137547   45440 command_runner.go:130] >   btrfs_noversion
	I0930 11:46:30.137552   45440 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0930 11:46:30.137556   45440 command_runner.go:130] >   libdm_no_deferred_remove
	I0930 11:46:30.137559   45440 command_runner.go:130] >   seccomp
	I0930 11:46:30.137563   45440 command_runner.go:130] > LDFlags:          unknown
	I0930 11:46:30.137598   45440 command_runner.go:130] > SeccompEnabled:   true
	I0930 11:46:30.137607   45440 command_runner.go:130] > AppArmorEnabled:  false
	I0930 11:46:30.139671   45440 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 11:46:30.141009   45440 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:46:30.143641   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:30.143980   45440 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:46:30.144013   45440 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:46:30.144204   45440 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:46:30.148767   45440 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0930 11:46:30.148883   45440 kubeadm.go:883] updating cluster {Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:46:30.149017   45440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 11:46:30.149084   45440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:46:30.191423   45440 command_runner.go:130] > {
	I0930 11:46:30.191452   45440 command_runner.go:130] >   "images": [
	I0930 11:46:30.191459   45440 command_runner.go:130] >     {
	I0930 11:46:30.191473   45440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 11:46:30.191481   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191490   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 11:46:30.191496   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191502   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191515   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 11:46:30.191530   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 11:46:30.191538   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191551   45440 command_runner.go:130] >       "size": "87190579",
	I0930 11:46:30.191558   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191564   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191571   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191578   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191584   45440 command_runner.go:130] >     },
	I0930 11:46:30.191590   45440 command_runner.go:130] >     {
	I0930 11:46:30.191600   45440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 11:46:30.191607   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191614   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 11:46:30.191617   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191622   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191629   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 11:46:30.191636   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 11:46:30.191641   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191645   45440 command_runner.go:130] >       "size": "1363676",
	I0930 11:46:30.191650   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191663   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191670   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191678   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191684   45440 command_runner.go:130] >     },
	I0930 11:46:30.191690   45440 command_runner.go:130] >     {
	I0930 11:46:30.191700   45440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 11:46:30.191709   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191717   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 11:46:30.191725   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191731   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191745   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 11:46:30.191758   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 11:46:30.191767   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191774   45440 command_runner.go:130] >       "size": "31470524",
	I0930 11:46:30.191782   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191792   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.191799   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191805   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191812   45440 command_runner.go:130] >     },
	I0930 11:46:30.191817   45440 command_runner.go:130] >     {
	I0930 11:46:30.191831   45440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 11:46:30.191840   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191851   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 11:46:30.191862   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191871   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191886   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 11:46:30.191902   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 11:46:30.191909   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191913   45440 command_runner.go:130] >       "size": "63273227",
	I0930 11:46:30.191917   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.191923   45440 command_runner.go:130] >       "username": "nonroot",
	I0930 11:46:30.191927   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.191935   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.191939   45440 command_runner.go:130] >     },
	I0930 11:46:30.191943   45440 command_runner.go:130] >     {
	I0930 11:46:30.191948   45440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 11:46:30.191955   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.191960   45440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 11:46:30.191965   45440 command_runner.go:130] >       ],
	I0930 11:46:30.191970   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.191979   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 11:46:30.192020   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 11:46:30.192030   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192035   45440 command_runner.go:130] >       "size": "149009664",
	I0930 11:46:30.192038   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192043   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192046   45440 command_runner.go:130] >       },
	I0930 11:46:30.192050   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192055   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192059   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192063   45440 command_runner.go:130] >     },
	I0930 11:46:30.192068   45440 command_runner.go:130] >     {
	I0930 11:46:30.192074   45440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 11:46:30.192080   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192085   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 11:46:30.192091   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192095   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192104   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 11:46:30.192113   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 11:46:30.192119   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192124   45440 command_runner.go:130] >       "size": "95237600",
	I0930 11:46:30.192129   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192133   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192138   45440 command_runner.go:130] >       },
	I0930 11:46:30.192143   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192152   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192160   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192168   45440 command_runner.go:130] >     },
	I0930 11:46:30.192177   45440 command_runner.go:130] >     {
	I0930 11:46:30.192189   45440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 11:46:30.192199   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192207   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 11:46:30.192215   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192221   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192236   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 11:46:30.192250   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 11:46:30.192256   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192260   45440 command_runner.go:130] >       "size": "89437508",
	I0930 11:46:30.192266   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192270   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192276   45440 command_runner.go:130] >       },
	I0930 11:46:30.192280   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192286   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192290   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192298   45440 command_runner.go:130] >     },
	I0930 11:46:30.192302   45440 command_runner.go:130] >     {
	I0930 11:46:30.192310   45440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 11:46:30.192316   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192321   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 11:46:30.192327   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192331   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192348   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 11:46:30.192357   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 11:46:30.192363   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192367   45440 command_runner.go:130] >       "size": "92733849",
	I0930 11:46:30.192373   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.192377   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192383   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192387   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192390   45440 command_runner.go:130] >     },
	I0930 11:46:30.192393   45440 command_runner.go:130] >     {
	I0930 11:46:30.192399   45440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 11:46:30.192402   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192407   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 11:46:30.192410   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192413   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192421   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 11:46:30.192429   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 11:46:30.192432   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192436   45440 command_runner.go:130] >       "size": "68420934",
	I0930 11:46:30.192439   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192443   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.192446   45440 command_runner.go:130] >       },
	I0930 11:46:30.192450   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192454   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192458   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.192461   45440 command_runner.go:130] >     },
	I0930 11:46:30.192464   45440 command_runner.go:130] >     {
	I0930 11:46:30.192470   45440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 11:46:30.192473   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.192477   45440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 11:46:30.192481   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192484   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.192490   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 11:46:30.192497   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 11:46:30.192502   45440 command_runner.go:130] >       ],
	I0930 11:46:30.192506   45440 command_runner.go:130] >       "size": "742080",
	I0930 11:46:30.192512   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.192516   45440 command_runner.go:130] >         "value": "65535"
	I0930 11:46:30.192522   45440 command_runner.go:130] >       },
	I0930 11:46:30.192525   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.192531   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.192535   45440 command_runner.go:130] >       "pinned": true
	I0930 11:46:30.192540   45440 command_runner.go:130] >     }
	I0930 11:46:30.192549   45440 command_runner.go:130] >   ]
	I0930 11:46:30.192554   45440 command_runner.go:130] > }
	I0930 11:46:30.192714   45440 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:46:30.192724   45440 crio.go:433] Images already preloaded, skipping extraction
	I0930 11:46:30.192764   45440 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:46:30.227616   45440 command_runner.go:130] > {
	I0930 11:46:30.227646   45440 command_runner.go:130] >   "images": [
	I0930 11:46:30.227651   45440 command_runner.go:130] >     {
	I0930 11:46:30.227663   45440 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0930 11:46:30.227670   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227678   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0930 11:46:30.227682   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227687   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227699   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0930 11:46:30.227710   45440 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0930 11:46:30.227717   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227724   45440 command_runner.go:130] >       "size": "87190579",
	I0930 11:46:30.227732   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.227743   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.227754   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.227760   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.227767   45440 command_runner.go:130] >     },
	I0930 11:46:30.227773   45440 command_runner.go:130] >     {
	I0930 11:46:30.227784   45440 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0930 11:46:30.227793   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227801   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0930 11:46:30.227807   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227814   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227827   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0930 11:46:30.227840   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0930 11:46:30.227849   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227857   45440 command_runner.go:130] >       "size": "1363676",
	I0930 11:46:30.227866   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.227880   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.227890   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.227897   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.227903   45440 command_runner.go:130] >     },
	I0930 11:46:30.227910   45440 command_runner.go:130] >     {
	I0930 11:46:30.227920   45440 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0930 11:46:30.227929   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.227939   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0930 11:46:30.227948   45440 command_runner.go:130] >       ],
	I0930 11:46:30.227955   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.227971   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0930 11:46:30.227987   45440 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0930 11:46:30.227995   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228003   45440 command_runner.go:130] >       "size": "31470524",
	I0930 11:46:30.228013   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228021   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228031   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228039   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228047   45440 command_runner.go:130] >     },
	I0930 11:46:30.228054   45440 command_runner.go:130] >     {
	I0930 11:46:30.228068   45440 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0930 11:46:30.228081   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228095   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0930 11:46:30.228104   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228111   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228126   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0930 11:46:30.228146   45440 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0930 11:46:30.228155   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228163   45440 command_runner.go:130] >       "size": "63273227",
	I0930 11:46:30.228178   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228187   45440 command_runner.go:130] >       "username": "nonroot",
	I0930 11:46:30.228201   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228210   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228217   45440 command_runner.go:130] >     },
	I0930 11:46:30.228225   45440 command_runner.go:130] >     {
	I0930 11:46:30.228236   45440 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0930 11:46:30.228245   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228253   45440 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0930 11:46:30.228263   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228271   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228286   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0930 11:46:30.228300   45440 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0930 11:46:30.228309   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228317   45440 command_runner.go:130] >       "size": "149009664",
	I0930 11:46:30.228325   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228333   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228341   45440 command_runner.go:130] >       },
	I0930 11:46:30.228349   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228357   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228365   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228372   45440 command_runner.go:130] >     },
	I0930 11:46:30.228378   45440 command_runner.go:130] >     {
	I0930 11:46:30.228406   45440 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0930 11:46:30.228414   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228422   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0930 11:46:30.228429   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228438   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228452   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0930 11:46:30.228465   45440 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0930 11:46:30.228473   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228481   45440 command_runner.go:130] >       "size": "95237600",
	I0930 11:46:30.228491   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228500   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228507   45440 command_runner.go:130] >       },
	I0930 11:46:30.228515   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228525   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228534   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228541   45440 command_runner.go:130] >     },
	I0930 11:46:30.228548   45440 command_runner.go:130] >     {
	I0930 11:46:30.228561   45440 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0930 11:46:30.228570   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228579   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0930 11:46:30.228589   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228598   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228617   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0930 11:46:30.228636   45440 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0930 11:46:30.228645   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228653   45440 command_runner.go:130] >       "size": "89437508",
	I0930 11:46:30.228662   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228668   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228678   45440 command_runner.go:130] >       },
	I0930 11:46:30.228686   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228695   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228702   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228710   45440 command_runner.go:130] >     },
	I0930 11:46:30.228717   45440 command_runner.go:130] >     {
	I0930 11:46:30.228731   45440 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0930 11:46:30.228740   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228747   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0930 11:46:30.228752   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228758   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228778   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0930 11:46:30.228794   45440 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0930 11:46:30.228802   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228810   45440 command_runner.go:130] >       "size": "92733849",
	I0930 11:46:30.228819   45440 command_runner.go:130] >       "uid": null,
	I0930 11:46:30.228827   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228836   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.228844   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.228852   45440 command_runner.go:130] >     },
	I0930 11:46:30.228859   45440 command_runner.go:130] >     {
	I0930 11:46:30.228871   45440 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0930 11:46:30.228880   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.228889   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0930 11:46:30.228899   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228908   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.228923   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0930 11:46:30.228938   45440 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0930 11:46:30.228948   45440 command_runner.go:130] >       ],
	I0930 11:46:30.228956   45440 command_runner.go:130] >       "size": "68420934",
	I0930 11:46:30.228965   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.228973   45440 command_runner.go:130] >         "value": "0"
	I0930 11:46:30.228981   45440 command_runner.go:130] >       },
	I0930 11:46:30.228988   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.228998   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.229007   45440 command_runner.go:130] >       "pinned": false
	I0930 11:46:30.229015   45440 command_runner.go:130] >     },
	I0930 11:46:30.229021   45440 command_runner.go:130] >     {
	I0930 11:46:30.229035   45440 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0930 11:46:30.229044   45440 command_runner.go:130] >       "repoTags": [
	I0930 11:46:30.229066   45440 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0930 11:46:30.229075   45440 command_runner.go:130] >       ],
	I0930 11:46:30.229083   45440 command_runner.go:130] >       "repoDigests": [
	I0930 11:46:30.229130   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0930 11:46:30.229148   45440 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0930 11:46:30.229155   45440 command_runner.go:130] >       ],
	I0930 11:46:30.229165   45440 command_runner.go:130] >       "size": "742080",
	I0930 11:46:30.229172   45440 command_runner.go:130] >       "uid": {
	I0930 11:46:30.229181   45440 command_runner.go:130] >         "value": "65535"
	I0930 11:46:30.229190   45440 command_runner.go:130] >       },
	I0930 11:46:30.229214   45440 command_runner.go:130] >       "username": "",
	I0930 11:46:30.229224   45440 command_runner.go:130] >       "spec": null,
	I0930 11:46:30.229231   45440 command_runner.go:130] >       "pinned": true
	I0930 11:46:30.229239   45440 command_runner.go:130] >     }
	I0930 11:46:30.229245   45440 command_runner.go:130] >   ]
	I0930 11:46:30.229253   45440 command_runner.go:130] > }
	I0930 11:46:30.229374   45440 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 11:46:30.229385   45440 cache_images.go:84] Images are preloaded, skipping loading
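For reference, a minimal sketch (not minikube's own code) of how the "sudo crictl images --output json" output captured above can be checked against a list of required tags. The struct fields mirror the JSON fields shown in the log (id, repoTags, repoDigests, size, pinned), and the required tags are taken from the image list logged above; everything else is an illustrative assumption.

// checkpreload.go: illustrative sketch only; field names mirror the crictl
// JSON shown in the log above (id, repoTags, repoDigests, size, pinned).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Same command the log shows minikube running over SSH.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Tags taken from the preloaded image list logged above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/pause:3.10",
	}
	for _, tag := range required {
		if !have[tag] {
			fmt.Println("missing image:", tag)
			return
		}
	}
	fmt.Println("all images are preloaded for cri-o runtime.")
}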
	I0930 11:46:30.229397   45440 kubeadm.go:934] updating node { 192.168.39.219 8443 v1.31.1 crio true true} ...
	I0930 11:46:30.229511   45440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-457103 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
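The kubelet unit text above (kubeadm.go:946) is assembled from the node parameters logged at kubeadm.go:934 ({192.168.39.219 8443 v1.31.1 crio true true}) and the cluster config that follows it. A minimal sketch of that assembly, assuming a plain text/template and using only values visible in the log; this is not minikube's actual template.

// kubeletunit.go: illustrative sketch; the template body and values are
// copied from the kubeadm.go:946 / kubeadm.go:934 log lines above.
package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{
		KubernetesVersion: "v1.31.1",
		NodeName:          "multinode-457103",
		NodeIP:            "192.168.39.219",
	}
	// Rendered text matches the unit logged by kubeadm.go:946 above.
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, data)
}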
	I0930 11:46:30.229588   45440 ssh_runner.go:195] Run: crio config
	I0930 11:46:30.267837   45440 command_runner.go:130] ! time="2024-09-30 11:46:30.242675717Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0930 11:46:30.273770   45440 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0930 11:46:30.281119   45440 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0930 11:46:30.281150   45440 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0930 11:46:30.281160   45440 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0930 11:46:30.281166   45440 command_runner.go:130] > #
	I0930 11:46:30.281177   45440 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0930 11:46:30.281186   45440 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0930 11:46:30.281195   45440 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0930 11:46:30.281208   45440 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0930 11:46:30.281218   45440 command_runner.go:130] > # reload'.
	I0930 11:46:30.281228   45440 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0930 11:46:30.281241   45440 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0930 11:46:30.281252   45440 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0930 11:46:30.281265   45440 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0930 11:46:30.281273   45440 command_runner.go:130] > [crio]
	I0930 11:46:30.281282   45440 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0930 11:46:30.281293   45440 command_runner.go:130] > # containers images, in this directory.
	I0930 11:46:30.281308   45440 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0930 11:46:30.281324   45440 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0930 11:46:30.281335   45440 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0930 11:46:30.281347   45440 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0930 11:46:30.281354   45440 command_runner.go:130] > # imagestore = ""
	I0930 11:46:30.281360   45440 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0930 11:46:30.281369   45440 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0930 11:46:30.281376   45440 command_runner.go:130] > storage_driver = "overlay"
	I0930 11:46:30.281381   45440 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0930 11:46:30.281389   45440 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0930 11:46:30.281395   45440 command_runner.go:130] > storage_option = [
	I0930 11:46:30.281400   45440 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0930 11:46:30.281405   45440 command_runner.go:130] > ]
	I0930 11:46:30.281412   45440 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0930 11:46:30.281420   45440 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0930 11:46:30.281429   45440 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0930 11:46:30.281436   45440 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0930 11:46:30.281442   45440 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0930 11:46:30.281448   45440 command_runner.go:130] > # always happen on a node reboot
	I0930 11:46:30.281453   45440 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0930 11:46:30.281463   45440 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0930 11:46:30.281471   45440 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0930 11:46:30.281478   45440 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0930 11:46:30.281483   45440 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0930 11:46:30.281493   45440 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0930 11:46:30.281502   45440 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0930 11:46:30.281508   45440 command_runner.go:130] > # internal_wipe = true
	I0930 11:46:30.281516   45440 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0930 11:46:30.281522   45440 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0930 11:46:30.281527   45440 command_runner.go:130] > # internal_repair = false
	I0930 11:46:30.281532   45440 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0930 11:46:30.281539   45440 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0930 11:46:30.281546   45440 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0930 11:46:30.281554   45440 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0930 11:46:30.281560   45440 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0930 11:46:30.281566   45440 command_runner.go:130] > [crio.api]
	I0930 11:46:30.281572   45440 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0930 11:46:30.281579   45440 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0930 11:46:30.281584   45440 command_runner.go:130] > # IP address on which the stream server will listen.
	I0930 11:46:30.281590   45440 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0930 11:46:30.281597   45440 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0930 11:46:30.281604   45440 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0930 11:46:30.281607   45440 command_runner.go:130] > # stream_port = "0"
	I0930 11:46:30.281628   45440 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0930 11:46:30.281635   45440 command_runner.go:130] > # stream_enable_tls = false
	I0930 11:46:30.281647   45440 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0930 11:46:30.281651   45440 command_runner.go:130] > # stream_idle_timeout = ""
	I0930 11:46:30.281660   45440 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0930 11:46:30.281667   45440 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0930 11:46:30.281673   45440 command_runner.go:130] > # minutes.
	I0930 11:46:30.281677   45440 command_runner.go:130] > # stream_tls_cert = ""
	I0930 11:46:30.281685   45440 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0930 11:46:30.281696   45440 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0930 11:46:30.281702   45440 command_runner.go:130] > # stream_tls_key = ""
	I0930 11:46:30.281708   45440 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0930 11:46:30.281716   45440 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0930 11:46:30.281733   45440 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0930 11:46:30.281739   45440 command_runner.go:130] > # stream_tls_ca = ""
	I0930 11:46:30.281747   45440 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 11:46:30.281754   45440 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0930 11:46:30.281760   45440 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0930 11:46:30.281767   45440 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0930 11:46:30.281773   45440 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0930 11:46:30.281782   45440 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0930 11:46:30.281788   45440 command_runner.go:130] > [crio.runtime]
	I0930 11:46:30.281796   45440 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0930 11:46:30.281804   45440 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0930 11:46:30.281808   45440 command_runner.go:130] > # "nofile=1024:2048"
	I0930 11:46:30.281814   45440 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0930 11:46:30.281820   45440 command_runner.go:130] > # default_ulimits = [
	I0930 11:46:30.281824   45440 command_runner.go:130] > # ]
	I0930 11:46:30.281832   45440 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0930 11:46:30.281837   45440 command_runner.go:130] > # no_pivot = false
	I0930 11:46:30.281842   45440 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0930 11:46:30.281850   45440 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0930 11:46:30.281856   45440 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0930 11:46:30.281863   45440 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0930 11:46:30.281868   45440 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0930 11:46:30.281876   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 11:46:30.281883   45440 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0930 11:46:30.281887   45440 command_runner.go:130] > # Cgroup setting for conmon
	I0930 11:46:30.281897   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0930 11:46:30.281904   45440 command_runner.go:130] > conmon_cgroup = "pod"
	I0930 11:46:30.281910   45440 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0930 11:46:30.281917   45440 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0930 11:46:30.281924   45440 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0930 11:46:30.281930   45440 command_runner.go:130] > conmon_env = [
	I0930 11:46:30.281936   45440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 11:46:30.281941   45440 command_runner.go:130] > ]
	I0930 11:46:30.281946   45440 command_runner.go:130] > # Additional environment variables to set for all the
	I0930 11:46:30.281951   45440 command_runner.go:130] > # containers. These are overridden if set in the
	I0930 11:46:30.281959   45440 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0930 11:46:30.281963   45440 command_runner.go:130] > # default_env = [
	I0930 11:46:30.281968   45440 command_runner.go:130] > # ]
	I0930 11:46:30.281974   45440 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0930 11:46:30.281983   45440 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0930 11:46:30.281988   45440 command_runner.go:130] > # selinux = false
	I0930 11:46:30.281994   45440 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0930 11:46:30.282003   45440 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0930 11:46:30.282010   45440 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0930 11:46:30.282015   45440 command_runner.go:130] > # seccomp_profile = ""
	I0930 11:46:30.282022   45440 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0930 11:46:30.282040   45440 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0930 11:46:30.282048   45440 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0930 11:46:30.282055   45440 command_runner.go:130] > # which might increase security.
	I0930 11:46:30.282059   45440 command_runner.go:130] > # This option is currently deprecated,
	I0930 11:46:30.282067   45440 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0930 11:46:30.282072   45440 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0930 11:46:30.282079   45440 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0930 11:46:30.282087   45440 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0930 11:46:30.282096   45440 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0930 11:46:30.282104   45440 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0930 11:46:30.282112   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282119   45440 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0930 11:46:30.282125   45440 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0930 11:46:30.282131   45440 command_runner.go:130] > # the cgroup blockio controller.
	I0930 11:46:30.282136   45440 command_runner.go:130] > # blockio_config_file = ""
	I0930 11:46:30.282144   45440 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0930 11:46:30.282147   45440 command_runner.go:130] > # blockio parameters.
	I0930 11:46:30.282156   45440 command_runner.go:130] > # blockio_reload = false
	I0930 11:46:30.282166   45440 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0930 11:46:30.282176   45440 command_runner.go:130] > # irqbalance daemon.
	I0930 11:46:30.282183   45440 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0930 11:46:30.282194   45440 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0930 11:46:30.282208   45440 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0930 11:46:30.282220   45440 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0930 11:46:30.282232   45440 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0930 11:46:30.282246   45440 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0930 11:46:30.282254   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282261   45440 command_runner.go:130] > # rdt_config_file = ""
	I0930 11:46:30.282267   45440 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0930 11:46:30.282275   45440 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0930 11:46:30.282298   45440 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0930 11:46:30.282305   45440 command_runner.go:130] > # separate_pull_cgroup = ""
	I0930 11:46:30.282311   45440 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0930 11:46:30.282319   45440 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0930 11:46:30.282326   45440 command_runner.go:130] > # will be added.
	I0930 11:46:30.282330   45440 command_runner.go:130] > # default_capabilities = [
	I0930 11:46:30.282336   45440 command_runner.go:130] > # 	"CHOWN",
	I0930 11:46:30.282340   45440 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0930 11:46:30.282344   45440 command_runner.go:130] > # 	"FSETID",
	I0930 11:46:30.282349   45440 command_runner.go:130] > # 	"FOWNER",
	I0930 11:46:30.282353   45440 command_runner.go:130] > # 	"SETGID",
	I0930 11:46:30.282359   45440 command_runner.go:130] > # 	"SETUID",
	I0930 11:46:30.282362   45440 command_runner.go:130] > # 	"SETPCAP",
	I0930 11:46:30.282368   45440 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0930 11:46:30.282373   45440 command_runner.go:130] > # 	"KILL",
	I0930 11:46:30.282379   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282387   45440 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0930 11:46:30.282395   45440 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0930 11:46:30.282403   45440 command_runner.go:130] > # add_inheritable_capabilities = false
	I0930 11:46:30.282409   45440 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0930 11:46:30.282417   45440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 11:46:30.282423   45440 command_runner.go:130] > default_sysctls = [
	I0930 11:46:30.282428   45440 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0930 11:46:30.282433   45440 command_runner.go:130] > ]
	I0930 11:46:30.282438   45440 command_runner.go:130] > # List of devices on the host that a
	I0930 11:46:30.282447   45440 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0930 11:46:30.282453   45440 command_runner.go:130] > # allowed_devices = [
	I0930 11:46:30.282457   45440 command_runner.go:130] > # 	"/dev/fuse",
	I0930 11:46:30.282462   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282467   45440 command_runner.go:130] > # List of additional devices. specified as
	I0930 11:46:30.282476   45440 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0930 11:46:30.282484   45440 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0930 11:46:30.282489   45440 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0930 11:46:30.282495   45440 command_runner.go:130] > # additional_devices = [
	I0930 11:46:30.282498   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282504   45440 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0930 11:46:30.282507   45440 command_runner.go:130] > # cdi_spec_dirs = [
	I0930 11:46:30.282513   45440 command_runner.go:130] > # 	"/etc/cdi",
	I0930 11:46:30.282517   45440 command_runner.go:130] > # 	"/var/run/cdi",
	I0930 11:46:30.282520   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282528   45440 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0930 11:46:30.282534   45440 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0930 11:46:30.282540   45440 command_runner.go:130] > # Defaults to false.
	I0930 11:46:30.282547   45440 command_runner.go:130] > # device_ownership_from_security_context = false
	I0930 11:46:30.282556   45440 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0930 11:46:30.282564   45440 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0930 11:46:30.282570   45440 command_runner.go:130] > # hooks_dir = [
	I0930 11:46:30.282574   45440 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0930 11:46:30.282581   45440 command_runner.go:130] > # ]
	I0930 11:46:30.282587   45440 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0930 11:46:30.282596   45440 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0930 11:46:30.282603   45440 command_runner.go:130] > # its default mounts from the following two files:
	I0930 11:46:30.282608   45440 command_runner.go:130] > #
	I0930 11:46:30.282614   45440 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0930 11:46:30.282623   45440 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0930 11:46:30.282631   45440 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0930 11:46:30.282634   45440 command_runner.go:130] > #
	I0930 11:46:30.282642   45440 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0930 11:46:30.282651   45440 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0930 11:46:30.282659   45440 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0930 11:46:30.282666   45440 command_runner.go:130] > #      only add mounts it finds in this file.
	I0930 11:46:30.282669   45440 command_runner.go:130] > #
	I0930 11:46:30.282674   45440 command_runner.go:130] > # default_mounts_file = ""
	I0930 11:46:30.282681   45440 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0930 11:46:30.282688   45440 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0930 11:46:30.282694   45440 command_runner.go:130] > pids_limit = 1024
	I0930 11:46:30.282701   45440 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0930 11:46:30.282709   45440 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0930 11:46:30.282718   45440 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0930 11:46:30.282726   45440 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0930 11:46:30.282732   45440 command_runner.go:130] > # log_size_max = -1
	I0930 11:46:30.282739   45440 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0930 11:46:30.282745   45440 command_runner.go:130] > # log_to_journald = false
	I0930 11:46:30.282751   45440 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0930 11:46:30.282758   45440 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0930 11:46:30.282763   45440 command_runner.go:130] > # Path to directory for container attach sockets.
	I0930 11:46:30.282770   45440 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0930 11:46:30.282775   45440 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0930 11:46:30.282781   45440 command_runner.go:130] > # bind_mount_prefix = ""
	I0930 11:46:30.282786   45440 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0930 11:46:30.282792   45440 command_runner.go:130] > # read_only = false
	I0930 11:46:30.282799   45440 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0930 11:46:30.282808   45440 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0930 11:46:30.282812   45440 command_runner.go:130] > # live configuration reload.
	I0930 11:46:30.282818   45440 command_runner.go:130] > # log_level = "info"
	I0930 11:46:30.282824   45440 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0930 11:46:30.282831   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.282835   45440 command_runner.go:130] > # log_filter = ""
	I0930 11:46:30.282843   45440 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0930 11:46:30.282850   45440 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0930 11:46:30.282856   45440 command_runner.go:130] > # separated by comma.
	I0930 11:46:30.282863   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282870   45440 command_runner.go:130] > # uid_mappings = ""
	I0930 11:46:30.282875   45440 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0930 11:46:30.282883   45440 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0930 11:46:30.282889   45440 command_runner.go:130] > # separated by comma.
	I0930 11:46:30.282897   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282903   45440 command_runner.go:130] > # gid_mappings = ""
	I0930 11:46:30.282910   45440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0930 11:46:30.282918   45440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 11:46:30.282924   45440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 11:46:30.282934   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282940   45440 command_runner.go:130] > # minimum_mappable_uid = -1
	I0930 11:46:30.282945   45440 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0930 11:46:30.282953   45440 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0930 11:46:30.282961   45440 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0930 11:46:30.282968   45440 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0930 11:46:30.282974   45440 command_runner.go:130] > # minimum_mappable_gid = -1
	I0930 11:46:30.282980   45440 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0930 11:46:30.282988   45440 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0930 11:46:30.282996   45440 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0930 11:46:30.283002   45440 command_runner.go:130] > # ctr_stop_timeout = 30
	I0930 11:46:30.283008   45440 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0930 11:46:30.283016   45440 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0930 11:46:30.283021   45440 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0930 11:46:30.283028   45440 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0930 11:46:30.283032   45440 command_runner.go:130] > drop_infra_ctr = false
	I0930 11:46:30.283040   45440 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0930 11:46:30.283048   45440 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0930 11:46:30.283055   45440 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0930 11:46:30.283061   45440 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0930 11:46:30.283068   45440 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0930 11:46:30.283076   45440 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0930 11:46:30.283083   45440 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0930 11:46:30.283090   45440 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0930 11:46:30.283094   45440 command_runner.go:130] > # shared_cpuset = ""
	I0930 11:46:30.283102   45440 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0930 11:46:30.283109   45440 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0930 11:46:30.283113   45440 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0930 11:46:30.283122   45440 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0930 11:46:30.283128   45440 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0930 11:46:30.283133   45440 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0930 11:46:30.283143   45440 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0930 11:46:30.283151   45440 command_runner.go:130] > # enable_criu_support = false
	I0930 11:46:30.283159   45440 command_runner.go:130] > # Enable/disable the generation of the container,
	I0930 11:46:30.283171   45440 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0930 11:46:30.283180   45440 command_runner.go:130] > # enable_pod_events = false
	I0930 11:46:30.283190   45440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 11:46:30.283202   45440 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0930 11:46:30.283213   45440 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0930 11:46:30.283221   45440 command_runner.go:130] > # default_runtime = "runc"
	I0930 11:46:30.283229   45440 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0930 11:46:30.283240   45440 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0930 11:46:30.283251   45440 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0930 11:46:30.283259   45440 command_runner.go:130] > # creation as a file is not desired either.
	I0930 11:46:30.283267   45440 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0930 11:46:30.283301   45440 command_runner.go:130] > # the hostname is being managed dynamically.
	I0930 11:46:30.283314   45440 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0930 11:46:30.283319   45440 command_runner.go:130] > # ]
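Building on the /etc/hostname example given in the comments, a minimal sketch of this option (illustrative only, under the [crio.runtime] table) would be:

	[crio.runtime]
	# fail container creation instead of silently creating this path as a directory
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]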
	I0930 11:46:30.283326   45440 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0930 11:46:30.283334   45440 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0930 11:46:30.283340   45440 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0930 11:46:30.283348   45440 command_runner.go:130] > # Each entry in the table should follow the format:
	I0930 11:46:30.283352   45440 command_runner.go:130] > #
	I0930 11:46:30.283361   45440 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0930 11:46:30.283368   45440 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0930 11:46:30.283400   45440 command_runner.go:130] > # runtime_type = "oci"
	I0930 11:46:30.283408   45440 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0930 11:46:30.283413   45440 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0930 11:46:30.283420   45440 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0930 11:46:30.283424   45440 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0930 11:46:30.283429   45440 command_runner.go:130] > # monitor_env = []
	I0930 11:46:30.283436   45440 command_runner.go:130] > # privileged_without_host_devices = false
	I0930 11:46:30.283440   45440 command_runner.go:130] > # allowed_annotations = []
	I0930 11:46:30.283448   45440 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0930 11:46:30.283452   45440 command_runner.go:130] > # Where:
	I0930 11:46:30.283459   45440 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0930 11:46:30.283466   45440 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0930 11:46:30.283475   45440 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0930 11:46:30.283483   45440 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0930 11:46:30.283487   45440 command_runner.go:130] > #   in $PATH.
	I0930 11:46:30.283493   45440 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0930 11:46:30.283500   45440 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0930 11:46:30.283506   45440 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0930 11:46:30.283512   45440 command_runner.go:130] > #   state.
	I0930 11:46:30.283518   45440 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0930 11:46:30.283523   45440 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0930 11:46:30.283531   45440 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0930 11:46:30.283537   45440 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0930 11:46:30.283546   45440 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0930 11:46:30.283556   45440 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0930 11:46:30.283562   45440 command_runner.go:130] > #   The currently recognized values are:
	I0930 11:46:30.283569   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0930 11:46:30.283578   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0930 11:46:30.283586   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0930 11:46:30.283594   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0930 11:46:30.283605   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0930 11:46:30.283613   45440 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0930 11:46:30.283620   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0930 11:46:30.283628   45440 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0930 11:46:30.283634   45440 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0930 11:46:30.283642   45440 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0930 11:46:30.283649   45440 command_runner.go:130] > #   deprecated option "conmon".
	I0930 11:46:30.283655   45440 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0930 11:46:30.283662   45440 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0930 11:46:30.283668   45440 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0930 11:46:30.283675   45440 command_runner.go:130] > #   should be moved to the container's cgroup
	I0930 11:46:30.283681   45440 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0930 11:46:30.283688   45440 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0930 11:46:30.283694   45440 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0930 11:46:30.283701   45440 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0930 11:46:30.283704   45440 command_runner.go:130] > #
	I0930 11:46:30.283709   45440 command_runner.go:130] > # Using the seccomp notifier feature:
	I0930 11:46:30.283714   45440 command_runner.go:130] > #
	I0930 11:46:30.283720   45440 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0930 11:46:30.283731   45440 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0930 11:46:30.283736   45440 command_runner.go:130] > #
	I0930 11:46:30.283742   45440 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0930 11:46:30.283750   45440 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0930 11:46:30.283753   45440 command_runner.go:130] > #
	I0930 11:46:30.283759   45440 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0930 11:46:30.283764   45440 command_runner.go:130] > # feature.
	I0930 11:46:30.283768   45440 command_runner.go:130] > #
	I0930 11:46:30.283777   45440 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0930 11:46:30.283785   45440 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0930 11:46:30.283791   45440 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0930 11:46:30.283799   45440 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0930 11:46:30.283807   45440 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0930 11:46:30.283810   45440 command_runner.go:130] > #
	I0930 11:46:30.283816   45440 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0930 11:46:30.283824   45440 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0930 11:46:30.283829   45440 command_runner.go:130] > #
	I0930 11:46:30.283835   45440 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0930 11:46:30.283843   45440 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0930 11:46:30.283846   45440 command_runner.go:130] > #
	I0930 11:46:30.283852   45440 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0930 11:46:30.283860   45440 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0930 11:46:30.283866   45440 command_runner.go:130] > # limitation.
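Tying the runtime-handler table format and the allowed_annotations mechanism together, a hypothetical additional handler entry (crun and its paths are used purely as an example; they are not part of this cluster's configuration) could look like:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# opt this handler into the seccomp notifier feature described above
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]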
	I0930 11:46:30.283872   45440 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0930 11:46:30.283878   45440 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0930 11:46:30.283882   45440 command_runner.go:130] > runtime_type = "oci"
	I0930 11:46:30.283889   45440 command_runner.go:130] > runtime_root = "/run/runc"
	I0930 11:46:30.283893   45440 command_runner.go:130] > runtime_config_path = ""
	I0930 11:46:30.283899   45440 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0930 11:46:30.283903   45440 command_runner.go:130] > monitor_cgroup = "pod"
	I0930 11:46:30.283907   45440 command_runner.go:130] > monitor_exec_cgroup = ""
	I0930 11:46:30.283913   45440 command_runner.go:130] > monitor_env = [
	I0930 11:46:30.283918   45440 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0930 11:46:30.283923   45440 command_runner.go:130] > ]
	I0930 11:46:30.283927   45440 command_runner.go:130] > privileged_without_host_devices = false
	I0930 11:46:30.283936   45440 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0930 11:46:30.283943   45440 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0930 11:46:30.283950   45440 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0930 11:46:30.283959   45440 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0930 11:46:30.283968   45440 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0930 11:46:30.283976   45440 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0930 11:46:30.283985   45440 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0930 11:46:30.283995   45440 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0930 11:46:30.284001   45440 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0930 11:46:30.284010   45440 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0930 11:46:30.284016   45440 command_runner.go:130] > # Example:
	I0930 11:46:30.284020   45440 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0930 11:46:30.284027   45440 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0930 11:46:30.284031   45440 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0930 11:46:30.284038   45440 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0930 11:46:30.284042   45440 command_runner.go:130] > # cpuset = 0
	I0930 11:46:30.284048   45440 command_runner.go:130] > # cpushares = "0-1"
	I0930 11:46:30.284052   45440 command_runner.go:130] > # Where:
	I0930 11:46:30.284058   45440 command_runner.go:130] > # The workload name is workload-type.
	I0930 11:46:30.284065   45440 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0930 11:46:30.284072   45440 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0930 11:46:30.284077   45440 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0930 11:46:30.284087   45440 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0930 11:46:30.284095   45440 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
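Read together, a hypothetical uncommented workload entry (assuming, as the field descriptions suggest, that cpuset takes a CPU-list string and cpushares a numeric share count) would be:

	[crio.runtime.workloads.sample-type]
	activation_annotation = "io.crio/sample-type"
	annotation_prefix = "io.crio.sample-type"
	[crio.runtime.workloads.sample-type.resources]
	cpushares = 1024
	cpuset = "0-3"

A pod opting in would then carry the "io.crio/sample-type" annotation, and per-container overrides would use keys of the form "io.crio.sample-type.$resource/$ctrName" as described above.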
	I0930 11:46:30.284102   45440 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0930 11:46:30.284108   45440 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0930 11:46:30.284115   45440 command_runner.go:130] > # Default value is set to true
	I0930 11:46:30.284120   45440 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0930 11:46:30.284127   45440 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0930 11:46:30.284132   45440 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0930 11:46:30.284138   45440 command_runner.go:130] > # Default value is set to 'false'
	I0930 11:46:30.284143   45440 command_runner.go:130] > # disable_hostport_mapping = false
	I0930 11:46:30.284153   45440 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0930 11:46:30.284160   45440 command_runner.go:130] > #
	I0930 11:46:30.284168   45440 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0930 11:46:30.284176   45440 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0930 11:46:30.284186   45440 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0930 11:46:30.284195   45440 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0930 11:46:30.284204   45440 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0930 11:46:30.284211   45440 command_runner.go:130] > [crio.image]
	I0930 11:46:30.284220   45440 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0930 11:46:30.284227   45440 command_runner.go:130] > # default_transport = "docker://"
	I0930 11:46:30.284236   45440 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0930 11:46:30.284245   45440 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0930 11:46:30.284250   45440 command_runner.go:130] > # global_auth_file = ""
	I0930 11:46:30.284257   45440 command_runner.go:130] > # The image used to instantiate infra containers.
	I0930 11:46:30.284262   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.284267   45440 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0930 11:46:30.284273   45440 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0930 11:46:30.284279   45440 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0930 11:46:30.284284   45440 command_runner.go:130] > # This option supports live configuration reload.
	I0930 11:46:30.284287   45440 command_runner.go:130] > # pause_image_auth_file = ""
	I0930 11:46:30.284297   45440 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0930 11:46:30.284303   45440 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0930 11:46:30.284309   45440 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0930 11:46:30.284314   45440 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0930 11:46:30.284318   45440 command_runner.go:130] > # pause_command = "/pause"
	I0930 11:46:30.284324   45440 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0930 11:46:30.284330   45440 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0930 11:46:30.284335   45440 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0930 11:46:30.284343   45440 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0930 11:46:30.284348   45440 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0930 11:46:30.284354   45440 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0930 11:46:30.284358   45440 command_runner.go:130] > # pinned_images = [
	I0930 11:46:30.284361   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284366   45440 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0930 11:46:30.284372   45440 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0930 11:46:30.284378   45440 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0930 11:46:30.284383   45440 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0930 11:46:30.284388   45440 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0930 11:46:30.284395   45440 command_runner.go:130] > # signature_policy = ""
	I0930 11:46:30.284400   45440 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0930 11:46:30.284410   45440 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0930 11:46:30.284418   45440 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0930 11:46:30.284426   45440 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0930 11:46:30.284435   45440 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0930 11:46:30.284442   45440 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0930 11:46:30.284448   45440 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0930 11:46:30.284456   45440 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0930 11:46:30.284462   45440 command_runner.go:130] > # changing them here.
	I0930 11:46:30.284466   45440 command_runner.go:130] > # insecure_registries = [
	I0930 11:46:30.284471   45440 command_runner.go:130] > # ]
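As an illustration of these [crio.image] knobs, one might pin the pause image used by this run and trust a local registry without TLS; the registry name below is hypothetical, not taken from this cluster:

	[crio.image]
	# keep the pause image out of kubelet garbage collection
	pinned_images = [
		"registry.k8s.io/pause:3.10",
	]
	# skip TLS verification for a hypothetical in-cluster registry
	insecure_registries = [
		"registry.local:5000",
	]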
	I0930 11:46:30.284477   45440 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0930 11:46:30.284484   45440 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0930 11:46:30.284488   45440 command_runner.go:130] > # image_volumes = "mkdir"
	I0930 11:46:30.284494   45440 command_runner.go:130] > # Temporary directory to use for storing big files
	I0930 11:46:30.284499   45440 command_runner.go:130] > # big_files_temporary_dir = ""
	I0930 11:46:30.284505   45440 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0930 11:46:30.284510   45440 command_runner.go:130] > # CNI plugins.
	I0930 11:46:30.284514   45440 command_runner.go:130] > [crio.network]
	I0930 11:46:30.284520   45440 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0930 11:46:30.284527   45440 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0930 11:46:30.284532   45440 command_runner.go:130] > # cni_default_network = ""
	I0930 11:46:30.284539   45440 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0930 11:46:30.284544   45440 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0930 11:46:30.284553   45440 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0930 11:46:30.284558   45440 command_runner.go:130] > # plugin_dirs = [
	I0930 11:46:30.284562   45440 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0930 11:46:30.284568   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284574   45440 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0930 11:46:30.284580   45440 command_runner.go:130] > [crio.metrics]
	I0930 11:46:30.284584   45440 command_runner.go:130] > # Globally enable or disable metrics support.
	I0930 11:46:30.284590   45440 command_runner.go:130] > enable_metrics = true
	I0930 11:46:30.284595   45440 command_runner.go:130] > # Specify enabled metrics collectors.
	I0930 11:46:30.284599   45440 command_runner.go:130] > # Per default all metrics are enabled.
	I0930 11:46:30.284608   45440 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0930 11:46:30.284614   45440 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0930 11:46:30.284622   45440 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0930 11:46:30.284626   45440 command_runner.go:130] > # metrics_collectors = [
	I0930 11:46:30.284632   45440 command_runner.go:130] > # 	"operations",
	I0930 11:46:30.284636   45440 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0930 11:46:30.284640   45440 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0930 11:46:30.284646   45440 command_runner.go:130] > # 	"operations_errors",
	I0930 11:46:30.284651   45440 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0930 11:46:30.284657   45440 command_runner.go:130] > # 	"image_pulls_by_name",
	I0930 11:46:30.284662   45440 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0930 11:46:30.284668   45440 command_runner.go:130] > # 	"image_pulls_failures",
	I0930 11:46:30.284672   45440 command_runner.go:130] > # 	"image_pulls_successes",
	I0930 11:46:30.284679   45440 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0930 11:46:30.284683   45440 command_runner.go:130] > # 	"image_layer_reuse",
	I0930 11:46:30.284690   45440 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0930 11:46:30.284694   45440 command_runner.go:130] > # 	"containers_oom_total",
	I0930 11:46:30.284700   45440 command_runner.go:130] > # 	"containers_oom",
	I0930 11:46:30.284704   45440 command_runner.go:130] > # 	"processes_defunct",
	I0930 11:46:30.284710   45440 command_runner.go:130] > # 	"operations_total",
	I0930 11:46:30.284714   45440 command_runner.go:130] > # 	"operations_latency_seconds",
	I0930 11:46:30.284720   45440 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0930 11:46:30.284725   45440 command_runner.go:130] > # 	"operations_errors_total",
	I0930 11:46:30.284731   45440 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0930 11:46:30.284735   45440 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0930 11:46:30.284742   45440 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0930 11:46:30.284746   45440 command_runner.go:130] > # 	"image_pulls_success_total",
	I0930 11:46:30.284752   45440 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0930 11:46:30.284756   45440 command_runner.go:130] > # 	"containers_oom_count_total",
	I0930 11:46:30.284763   45440 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0930 11:46:30.284767   45440 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0930 11:46:30.284773   45440 command_runner.go:130] > # ]
	I0930 11:46:30.284778   45440 command_runner.go:130] > # The port on which the metrics server will listen.
	I0930 11:46:30.284783   45440 command_runner.go:130] > # metrics_port = 9090
	I0930 11:46:30.284790   45440 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0930 11:46:30.284794   45440 command_runner.go:130] > # metrics_socket = ""
	I0930 11:46:30.284802   45440 command_runner.go:130] > # The certificate for the secure metrics server.
	I0930 11:46:30.284808   45440 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0930 11:46:30.284816   45440 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0930 11:46:30.284823   45440 command_runner.go:130] > # certificate on any modification event.
	I0930 11:46:30.284827   45440 command_runner.go:130] > # metrics_cert = ""
	I0930 11:46:30.284834   45440 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0930 11:46:30.284838   45440 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0930 11:46:30.284844   45440 command_runner.go:130] > # metrics_key = ""
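For reference, a sketch that keeps metrics enabled on the default port but restricts collection to two of the collectors listed above (purely illustrative) could be:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# collector names may also be given with the "crio_" or "container_runtime_" prefix
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]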
	I0930 11:46:30.284850   45440 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0930 11:46:30.284856   45440 command_runner.go:130] > [crio.tracing]
	I0930 11:46:30.284862   45440 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0930 11:46:30.284868   45440 command_runner.go:130] > # enable_tracing = false
	I0930 11:46:30.284873   45440 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0930 11:46:30.284878   45440 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0930 11:46:30.284886   45440 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0930 11:46:30.284894   45440 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
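A hypothetical tracing configuration that exports every span to a local OTLP collector (the endpoint address is an assumption, not taken from this run) would look like:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# 1000000 samples per million = always sample, per the comment above
	tracing_sampling_rate_per_million = 1000000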
	I0930 11:46:30.284898   45440 command_runner.go:130] > # CRI-O NRI configuration.
	I0930 11:46:30.284903   45440 command_runner.go:130] > [crio.nri]
	I0930 11:46:30.284908   45440 command_runner.go:130] > # Globally enable or disable NRI.
	I0930 11:46:30.284914   45440 command_runner.go:130] > # enable_nri = false
	I0930 11:46:30.284918   45440 command_runner.go:130] > # NRI socket to listen on.
	I0930 11:46:30.284925   45440 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0930 11:46:30.284929   45440 command_runner.go:130] > # NRI plugin directory to use.
	I0930 11:46:30.284936   45440 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0930 11:46:30.284940   45440 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0930 11:46:30.284947   45440 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0930 11:46:30.284952   45440 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0930 11:46:30.284958   45440 command_runner.go:130] > # nri_disable_connections = false
	I0930 11:46:30.284963   45440 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0930 11:46:30.284971   45440 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0930 11:46:30.284976   45440 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0930 11:46:30.284983   45440 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0930 11:46:30.284989   45440 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0930 11:46:30.284995   45440 command_runner.go:130] > [crio.stats]
	I0930 11:46:30.285001   45440 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0930 11:46:30.285008   45440 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0930 11:46:30.285012   45440 command_runner.go:130] > # stats_collection_period = 0
	I0930 11:46:30.285085   45440 cni.go:84] Creating CNI manager for ""
	I0930 11:46:30.285095   45440 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0930 11:46:30.285105   45440 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:46:30.285126   45440 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-457103 NodeName:multinode-457103 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:46:30.285278   45440 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.219
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-457103"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:46:30.285350   45440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 11:46:30.295631   45440 command_runner.go:130] > kubeadm
	I0930 11:46:30.295652   45440 command_runner.go:130] > kubectl
	I0930 11:46:30.295656   45440 command_runner.go:130] > kubelet
	I0930 11:46:30.295681   45440 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:46:30.295725   45440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 11:46:30.305927   45440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0930 11:46:30.324306   45440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:46:30.344271   45440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0930 11:46:30.363305   45440 ssh_runner.go:195] Run: grep 192.168.39.219	control-plane.minikube.internal$ /etc/hosts
	I0930 11:46:30.367795   45440 command_runner.go:130] > 192.168.39.219	control-plane.minikube.internal
	I0930 11:46:30.367871   45440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:46:30.526997   45440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:46:30.543048   45440 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103 for IP: 192.168.39.219
	I0930 11:46:30.543083   45440 certs.go:194] generating shared ca certs ...
	I0930 11:46:30.543105   45440 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:46:30.543279   45440 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:46:30.543339   45440 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:46:30.543353   45440 certs.go:256] generating profile certs ...
	I0930 11:46:30.543445   45440 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/client.key
	I0930 11:46:30.543521   45440 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key.37ca6d7c
	I0930 11:46:30.543575   45440 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key
	I0930 11:46:30.543591   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0930 11:46:30.543610   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0930 11:46:30.543629   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 11:46:30.543649   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0930 11:46:30.543668   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0930 11:46:30.543687   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0930 11:46:30.543706   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0930 11:46:30.543725   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0930 11:46:30.543791   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:46:30.543846   45440 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:46:30.543860   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:46:30.543901   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:46:30.543934   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:46:30.543966   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:46:30.544020   45440 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:46:30.544061   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.544081   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem -> /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.544100   45440 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.544713   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:46:30.573242   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:46:30.599704   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:46:30.625383   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:46:30.650839   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0930 11:46:30.678871   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 11:46:30.704951   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:46:30.731562   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/multinode-457103/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 11:46:30.758841   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:46:30.785798   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:46:30.813233   45440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:46:30.840267   45440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:46:30.858022   45440 ssh_runner.go:195] Run: openssl version
	I0930 11:46:30.864503   45440 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0930 11:46:30.864588   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:46:30.876155   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881192   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881245   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.881314   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:46:30.887390   45440 command_runner.go:130] > 51391683
	I0930 11:46:30.887467   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:46:30.897407   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:46:30.909494   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914418   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914456   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.914509   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:46:30.920739   45440 command_runner.go:130] > 3ec20f2e
	I0930 11:46:30.920822   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:46:30.931094   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:46:30.942903   45440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.947924   45440 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.948056   45440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.948118   45440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:46:30.954234   45440 command_runner.go:130] > b5213941
	I0930 11:46:30.954310   45440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:46:30.965078   45440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:46:30.970000   45440 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:46:30.970024   45440 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0930 11:46:30.970032   45440 command_runner.go:130] > Device: 253,1	Inode: 1054760     Links: 1
	I0930 11:46:30.970040   45440 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0930 11:46:30.970048   45440 command_runner.go:130] > Access: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970054   45440 command_runner.go:130] > Modify: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970061   45440 command_runner.go:130] > Change: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970068   45440 command_runner.go:130] >  Birth: 2024-09-30 11:39:49.134627133 +0000
	I0930 11:46:30.970130   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:46:30.976125   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.976215   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:46:30.981871   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.982012   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:46:30.987644   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.987714   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:46:30.993385   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.993465   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:46:30.999580   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:30.999658   45440 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 11:46:31.005530   45440 command_runner.go:130] > Certificate will not expire
	I0930 11:46:31.005612   45440 kubeadm.go:392] StartCluster: {Name:multinode-457103 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-457103 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.180 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.58 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:46:31.005720   45440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:46:31.005762   45440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:46:31.048293   45440 command_runner.go:130] > bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de
	I0930 11:46:31.048331   45440 command_runner.go:130] > b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa
	I0930 11:46:31.048338   45440 command_runner.go:130] > 1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332
	I0930 11:46:31.048345   45440 command_runner.go:130] > 27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e
	I0930 11:46:31.048353   45440 command_runner.go:130] > d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867
	I0930 11:46:31.048360   45440 command_runner.go:130] > 81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd
	I0930 11:46:31.048365   45440 command_runner.go:130] > 985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28
	I0930 11:46:31.048379   45440 command_runner.go:130] > 14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06
	I0930 11:46:31.048403   45440 cri.go:89] found id: "bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de"
	I0930 11:46:31.048414   45440 cri.go:89] found id: "b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa"
	I0930 11:46:31.048420   45440 cri.go:89] found id: "1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332"
	I0930 11:46:31.048437   45440 cri.go:89] found id: "27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e"
	I0930 11:46:31.048440   45440 cri.go:89] found id: "d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867"
	I0930 11:46:31.048444   45440 cri.go:89] found id: "81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd"
	I0930 11:46:31.048446   45440 cri.go:89] found id: "985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28"
	I0930 11:46:31.048450   45440 cri.go:89] found id: "14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06"
	I0930 11:46:31.048453   45440 cri.go:89] found id: ""
	I0930 11:46:31.048501   45440 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.769030204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697045769006145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf955e8d-92be-4db2-bd41-7fc5abdb85f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.769617301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aefc205a-5ecb-479d-a366-4095ddb2b572 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.769700011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aefc205a-5ecb-479d-a366-4095ddb2b572 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.770035584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aefc205a-5ecb-479d-a366-4095ddb2b572 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.810672379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a228055f-6599-437f-a15b-6642f9ab402e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.810746309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a228055f-6599-437f-a15b-6642f9ab402e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.812294600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01d45c90-2917-451f-b455-abcbc72be623 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.812873920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697045812850628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01d45c90-2917-451f-b455-abcbc72be623 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.813472160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f555c784-a94c-4842-805d-7991f53e7ffd name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.813544376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f555c784-a94c-4842-805d-7991f53e7ffd name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.813914693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f555c784-a94c-4842-805d-7991f53e7ffd name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.857628813Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efc1c1ba-c241-4dc6-8c8f-a42c2d2ae88e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.857831003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efc1c1ba-c241-4dc6-8c8f-a42c2d2ae88e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.859052080Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=adf59f3b-05ff-4904-a4b1-da1451f7d2d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.859534897Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697045859511994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adf59f3b-05ff-4904-a4b1-da1451f7d2d2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.859966793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=814dada5-a29f-4809-bef7-10502c9421f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.860024426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=814dada5-a29f-4809-bef7-10502c9421f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.860483218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=814dada5-a29f-4809-bef7-10502c9421f0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.903553249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5a251a4c-9907-4ab5-b195-a8b27da5de4e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.903845015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5a251a4c-9907-4ab5-b195-a8b27da5de4e name=/runtime.v1.RuntimeService/Version
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.905050939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8f14bb5-30f0-4794-8fd1-3da7478ea8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.905520433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697045905496643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8f14bb5-30f0-4794-8fd1-3da7478ea8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.906098702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=527542d2-7249-4747-a3a9-9d538fc12081 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.906173742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=527542d2-7249-4747-a3a9-9d538fc12081 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:50:45 multinode-457103 crio[2743]: time="2024-09-30 11:50:45.906588072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18c91cea164eb9cb16ebc55a8e269e1f2cb9bba8ade65fd6970501075a6ab2c9,PodSandboxId:6b7fd835a17feb152f80cb2d800987a88326d494c079cf2df087120e040be83a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727696831993858488,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150,PodSandboxId:1acc75474f4789955eda9f3071a69c816ca0cf6d3d91e282a3ed39b45dda423c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727696798520217037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3,PodSandboxId:96352937ab74a1352a9abbf849cde180cb7cee2a6eeb7cad1270ca4a6f760cd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727696798437659820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e,PodSandboxId:040289e57fee2df5c6b3b79fe32d0af3959c213e1b311f6d305e06a4da8706e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727696798373883269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6-ae1242330075,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7a92f8bb933fc8f9020a3fec56bed865822abd9e57e60e905a2ed4ce43a4870,PodSandboxId:001b8db3f6eeec7f6c29f11fc948c0e47f55c6e06f23575811241060344e265e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727696798279192188,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd,PodSandboxId:a38d93dbf5689627e46271ed30305b1b2f0b08132ae9f2a2b9a9804b39042ac0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727696793387542625,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d,PodSandboxId:5d919108fb3ea901ce5952a29be9d45f5f66a17f4c16f23a33a0cc99def7801f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727696793403538182,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e,PodSandboxId:7b23262f7de985bfeb7cdd966a27dda350096c131e77e7beb8be2d40a3ad78d4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727696793368232762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65,PodSandboxId:47f0193bc86ee7ab6679896f9deb5b111cbb028e3b121438783527671e107854,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727696793333703233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04966fcb879e6def197e0982c29d29616b261d3a2e5c1603149ad37a8d7d22ab,PodSandboxId:a9d252ffc3694030f1d3aea53c9171a42a528b4c43277027e36c9fdcd0a8a0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727696472349349936,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-hwwdc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c81334ea-fd48-4e97-9e43-8bd50dabf0cb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de,PodSandboxId:4bbc7b88ac1b3f1d651d4ac4219fad44e57d51735badb66535722553f38bbb83,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727696418427051426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-cchmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f096551-b87c-4aca-9345-b054e1af235a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b031192f52a679ae8fee76b30dd6f63c8aa103d2c88e2c284808820fc487fa,PodSandboxId:6d2e9a6295dd79df0df9fd6f099f19264977403660699ea62f2c12280ce9cfa5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727696416877159436,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d3047bc0-15a5-4820-b5ff-4718909e1d7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332,PodSandboxId:941c0405e0fc60e229080b3f79d7d5658b96f1f6e1e9232263ebb2bc94732d76,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727696404904627623,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-8bjzm,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ccb25478-bf00-4afa-94a6-d1c0a2608112,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e,PodSandboxId:7d109891f0ca1353cf4f24e747e869f5e94c9b9a13d0b84d7f6ab337f1bd812b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727696404806112304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-77tjs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40654ea-0812-44b3-bff6
-ae1242330075,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867,PodSandboxId:e3c3ef7715258817a7903aec0ce6acd615f038e4f3feeab7a81be3534a5e2c82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727696394011861048,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f59a8bdcb8289b0082097a1f5b7b8fe1,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd,PodSandboxId:32d020b7d015ca8b2988b47ecf28c7a34d8a313cf1300f59bd0b2dc70b84e860,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727696393953301218,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3df031ba6c22d5394dc2ec28aa194c6,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28,PodSandboxId:f05549c8abcc828ca523a70d24dff471cd30dc422ff4831d7b6be98a7f98c3c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727696393918914988,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eb14e8f7463b06abba97c49965f63f,},Annotations:map[string]string{io.kubernetes.container.hash:
cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06,PodSandboxId:e410f8da196c6eee919f5ab4e9178d85710adefc65a7902427c32c238f56ad83,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727696393893967758,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-457103,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 764330db4fc0ab8999abdc9a8ebfe6ee,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=527542d2-7249-4747-a3a9-9d538fc12081 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	18c91cea164eb       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   6b7fd835a17fe       busybox-7dff88458-hwwdc
	e42d74aa167a2       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   1acc75474f478       kindnet-8bjzm
	4341d06ffd7db       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   96352937ab74a       coredns-7c65d6cfc9-cchmp
	7258510df4e37       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   040289e57fee2       kube-proxy-77tjs
	f7a92f8bb933f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   001b8db3f6eee       storage-provisioner
	eaf6af32d43e6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   5d919108fb3ea       kube-scheduler-multinode-457103
	92157ff23e9a4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   a38d93dbf5689       etcd-multinode-457103
	f90b4779f01dd       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   7b23262f7de98       kube-controller-manager-multinode-457103
	67c22defd31e2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   47f0193bc86ee       kube-apiserver-multinode-457103
	04966fcb879e6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a9d252ffc3694       busybox-7dff88458-hwwdc
	bee86c8246408       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   4bbc7b88ac1b3       coredns-7c65d6cfc9-cchmp
	b9b031192f52a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   6d2e9a6295dd7       storage-provisioner
	1692df8a76bd8       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   941c0405e0fc6       kindnet-8bjzm
	27d4e50ea8999       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   7d109891f0ca1       kube-proxy-77tjs
	d0c657343cf2c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   e3c3ef7715258       kube-scheduler-multinode-457103
	81d3a3b58452b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   32d020b7d015c       kube-apiserver-multinode-457103
	985558f9028b4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   f05549c8abcc8       etcd-multinode-457103
	14bdb366a11a6       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   e410f8da196c6       kube-controller-manager-multinode-457103
	
	
	==> coredns [4341d06ffd7dbc8c6b785eb39ed01983ce88bfdac176321287e015dd9e8446c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56525 - 40399 "HINFO IN 4520725143726543011.5835794679418722259. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02717377s
	
	
	==> coredns [bee86c82464086e390e89d28008ba06000d76780d6ef97c0846babdd9e99a6de] <==
	[INFO] 10.244.1.2:49166 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00214942s
	[INFO] 10.244.1.2:35848 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013174s
	[INFO] 10.244.1.2:43378 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009867s
	[INFO] 10.244.1.2:38560 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001643951s
	[INFO] 10.244.1.2:49873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196008s
	[INFO] 10.244.1.2:57622 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101982s
	[INFO] 10.244.1.2:33177 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126401s
	[INFO] 10.244.0.3:49103 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001388s
	[INFO] 10.244.0.3:53416 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091022s
	[INFO] 10.244.0.3:59575 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118932s
	[INFO] 10.244.0.3:47749 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073111s
	[INFO] 10.244.1.2:39231 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140327s
	[INFO] 10.244.1.2:59236 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161937s
	[INFO] 10.244.1.2:56200 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102831s
	[INFO] 10.244.1.2:40944 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095915s
	[INFO] 10.244.0.3:58989 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120725s
	[INFO] 10.244.0.3:52719 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00030147s
	[INFO] 10.244.0.3:60944 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116043s
	[INFO] 10.244.0.3:58642 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001092s
	[INFO] 10.244.1.2:41523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238737s
	[INFO] 10.244.1.2:37343 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000122519s
	[INFO] 10.244.1.2:45395 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107834s
	[INFO] 10.244.1.2:45571 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-457103
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-457103
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=multinode-457103
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_40_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:39:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-457103
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:50:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:46:37 +0000   Mon, 30 Sep 2024 11:40:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    multinode-457103
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1745f688465a4020b2f275f7e9845e3f
	  System UUID:                1745f688-465a-4020-b2f2-75f7e9845e3f
	  Boot ID:                    470a25c9-ac55-4ff5-b4fb-26958d2f4a3a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hwwdc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-7c65d6cfc9-cchmp                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-457103                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-8bjzm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-457103             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-457103    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-77tjs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-457103             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-457103 event: Registered Node multinode-457103 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-457103 status is now: NodeReady
	  Normal  Starting                 4m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node multinode-457103 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node multinode-457103 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node multinode-457103 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                   node-controller  Node multinode-457103 event: Registered Node multinode-457103 in Controller
	
	
	Name:               multinode-457103-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-457103-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=multinode-457103
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_30T11_47_18_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:47:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-457103-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:48:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:49:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:49:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:49:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 30 Sep 2024 11:47:48 +0000   Mon, 30 Sep 2024 11:49:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    multinode-457103-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 864d40ee5b994d73a12720b6ae83e95c
	  System UUID:                864d40ee-5b99-4d73-a127-20b6ae83e95c
	  Boot ID:                    00fb2fa0-efa5-41c8-806c-9abcc6a638b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wxt9x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kindnet-rb7dr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-dg4xz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m23s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 10m)    kubelet          Node multinode-457103-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 10m)    kubelet          Node multinode-457103-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 10m)    kubelet          Node multinode-457103-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-457103-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m29s (x2 over 3m29s)  kubelet          Node multinode-457103-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s (x2 over 3m29s)  kubelet          Node multinode-457103-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s (x2 over 3m29s)  kubelet          Node multinode-457103-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m10s                  kubelet          Node multinode-457103-m02 status is now: NodeReady
	  Normal  NodeNotReady             106s                   node-controller  Node multinode-457103-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.067905] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060442] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.153422] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.143437] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.297361] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.063117] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +5.155123] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.057543] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.484402] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.079463] kauditd_printk_skb: 69 callbacks suppressed
	[Sep30 11:40] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.141675] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.299430] kauditd_printk_skb: 60 callbacks suppressed
	[Sep30 11:41] kauditd_printk_skb: 14 callbacks suppressed
	[Sep30 11:46] systemd-fstab-generator[2668]: Ignoring "noauto" option for root device
	[  +0.153090] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.179178] systemd-fstab-generator[2694]: Ignoring "noauto" option for root device
	[  +0.157622] systemd-fstab-generator[2706]: Ignoring "noauto" option for root device
	[  +0.288908] systemd-fstab-generator[2734]: Ignoring "noauto" option for root device
	[  +0.695516] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +1.986089] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +5.801799] kauditd_printk_skb: 184 callbacks suppressed
	[ +15.099055] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.096138] kauditd_printk_skb: 36 callbacks suppressed
	[Sep30 11:47] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [92157ff23e9a4b9f33f1542e01d076c96cbd606a4eaff1122bd48ec13f76dcbd] <==
	{"level":"info","ts":"2024-09-30T11:46:33.971494Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","added-peer-id":"28ab8665a749e374","added-peer-peer-urls":["https://192.168.39.219:2380"]}
	{"level":"info","ts":"2024-09-30T11:46:33.971643Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:46:33.971695Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:46:33.973810Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:33.983956Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T11:46:33.986862Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"28ab8665a749e374","initial-advertise-peer-urls":["https://192.168.39.219:2380"],"listen-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.219:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T11:46:33.988167Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T11:46:33.984194Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:46:33.994681Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:46:35.517796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.517920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.517979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgPreVoteResp from 28ab8665a749e374 at term 2"}
	{"level":"info","ts":"2024-09-30T11:46:35.518025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgVoteResp from 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.518115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28ab8665a749e374 elected leader 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2024-09-30T11:46:35.522867Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:multinode-457103 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:46:35.522978Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:46:35.523540Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:46:35.524448Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:35.524561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T11:46:35.525460Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T11:46:35.525642Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:46:35.525675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T11:46:35.525460Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.219:2379"}
	
	
	==> etcd [985558f9028b4dfd002fe6de7dbd898fc45666bb7d1e60a36a904cb78667aa28] <==
	{"level":"info","ts":"2024-09-30T11:39:54.844083Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:39:54.844132Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T11:39:54.803151Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:multinode-457103 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:39:54.847488Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:39:54.847600Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:39:54.847645Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:40:07.485787Z","caller":"traceutil/trace.go:171","msg":"trace[913904409] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"118.633232ms","start":"2024-09-30T11:40:07.367140Z","end":"2024-09-30T11:40:07.485774Z","steps":["trace[913904409] 'process raft request'  (duration: 118.532637ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:40:47.030433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.94418ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16389885759208307633 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-457103-m02.17fa02c3bea6ab6a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-457103-m02.17fa02c3bea6ab6a\" value_size:646 lease:7166513722353530860 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-30T11:40:47.030823Z","caller":"traceutil/trace.go:171","msg":"trace[1785805709] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"212.433999ms","start":"2024-09-30T11:40:46.818358Z","end":"2024-09-30T11:40:47.030792Z","steps":["trace[1785805709] 'process raft request'  (duration: 80.397287ms)","trace[1785805709] 'compare'  (duration: 130.79024ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T11:40:53.384655Z","caller":"traceutil/trace.go:171","msg":"trace[1454558008] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"205.370784ms","start":"2024-09-30T11:40:53.179270Z","end":"2024-09-30T11:40:53.384641Z","steps":["trace[1454558008] 'process raft request'  (duration: 204.955318ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:41:43.612976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.539966ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T11:41:43.613540Z","caller":"traceutil/trace.go:171","msg":"trace[2022186823] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:613; }","duration":"175.915098ms","start":"2024-09-30T11:41:43.437297Z","end":"2024-09-30T11:41:43.613212Z","steps":["trace[2022186823] 'range keys from in-memory index tree'  (duration: 175.515567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-30T11:41:43.617977Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.409112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T11:41:43.618037Z","caller":"traceutil/trace.go:171","msg":"trace[247210808] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:613; }","duration":"112.484138ms","start":"2024-09-30T11:41:43.505537Z","end":"2024-09-30T11:41:43.618022Z","steps":["trace[247210808] 'agreement among raft nodes before linearized reading'  (duration: 112.387056ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T11:41:43.618282Z","caller":"traceutil/trace.go:171","msg":"trace[1798779071] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"112.167649ms","start":"2024-09-30T11:41:43.505606Z","end":"2024-09-30T11:41:43.617774Z","steps":["trace[1798779071] 'read index received'  (duration: 100.194025ms)","trace[1798779071] 'applied index is now lower than readState.Index'  (duration: 11.972587ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-30T11:44:57.648085Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-30T11:44:57.648236Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-457103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"]}
	{"level":"warn","ts":"2024-09-30T11:44:57.650117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.650307Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.689663Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.219:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-30T11:44:57.689698Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.219:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-30T11:44:57.689751Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"28ab8665a749e374","current-leader-member-id":"28ab8665a749e374"}
	{"level":"info","ts":"2024-09-30T11:44:57.692584Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:44:57.692791Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-09-30T11:44:57.692851Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-457103","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"]}
	
	
	==> kernel <==
	 11:50:46 up 11 min,  0 users,  load average: 0.17, 0.16, 0.10
	Linux multinode-457103 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1692df8a76bd87f03ec62dedf25dd6dd91af87fbb5e4cda2b19d45ace6671332] <==
	I0930 11:44:16.271981       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:26.271915       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:26.272120       1 main.go:299] handling current node
	I0930 11:44:26.272158       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:26.272192       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:26.272464       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:26.272497       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:36.266987       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:36.267155       1 main.go:299] handling current node
	I0930 11:44:36.267226       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:36.267234       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:36.267530       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:36.267556       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:46.271058       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:46.271142       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:44:46.271314       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:46.271339       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:46.271526       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:46.271551       1 main.go:299] handling current node
	I0930 11:44:56.265017       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0930 11:44:56.265186       1 main.go:322] Node multinode-457103-m03 has CIDR [10.244.3.0/24] 
	I0930 11:44:56.265514       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:44:56.265555       1 main.go:299] handling current node
	I0930 11:44:56.265569       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:44:56.265575       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e42d74aa167a26191c7dbb4d154ce9c5e2a93c1c7f8e45f149848ea8336b9150] <==
	I0930 11:49:39.482685       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:49:49.481924       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:49:49.482063       1 main.go:299] handling current node
	I0930 11:49:49.482095       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:49:49.482168       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:49:59.482840       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:49:59.483305       1 main.go:299] handling current node
	I0930 11:49:59.483360       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:49:59.483506       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:50:09.484542       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:50:09.484591       1 main.go:299] handling current node
	I0930 11:50:09.484607       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:50:09.484612       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:50:19.489637       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:50:19.489732       1 main.go:299] handling current node
	I0930 11:50:19.489760       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:50:19.489777       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:50:29.491250       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:50:29.491464       1 main.go:299] handling current node
	I0930 11:50:29.491526       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:50:29.491537       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	I0930 11:50:39.482738       1 main.go:295] Handling node with IPs: map[192.168.39.219:{}]
	I0930 11:50:39.482818       1 main.go:299] handling current node
	I0930 11:50:39.482877       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0930 11:50:39.482884       1 main.go:322] Node multinode-457103-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [67c22defd31e230ac102f04f3797815bc79d2968016924765aaaea0c9152da65] <==
	I0930 11:46:37.006115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 11:46:37.006354       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 11:46:37.006499       1 aggregator.go:171] initial CRD sync complete...
	I0930 11:46:37.006508       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 11:46:37.006512       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 11:46:37.006517       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:46:37.006878       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0930 11:46:37.006946       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:46:37.013206       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 11:46:37.056765       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 11:46:37.056859       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 11:46:37.057006       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E0930 11:46:37.063788       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 11:46:37.064371       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 11:46:37.070962       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 11:46:37.071034       1 policy_source.go:224] refreshing policies
	I0930 11:46:37.074709       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:46:37.861966       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 11:46:39.339262       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 11:46:39.484747       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 11:46:39.505607       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 11:46:39.585133       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 11:46:39.596632       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 11:46:40.354285       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0930 11:46:40.699513       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [81d3a3b58452b0364ea33b26628b375d2c731ef40ae280e4c08ebc6d431404bd] <==
	W0930 11:44:57.680829       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680858       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680889       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.680914       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681125       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681174       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681220       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681253       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681285       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681322       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681355       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.681454       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.683571       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684133       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684183       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684219       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.684254       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685229       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685275       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685302       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685327       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685366       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.685524       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.686249       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0930 11:44:57.686298       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [14bdb366a11a68d7760f5e279090aa101af3752417ff98374a222a68a0200f06] <==
	I0930 11:42:32.295361       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:32.296124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.387270       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:33.388612       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-457103-m03\" does not exist"
	I0930 11:42:33.400322       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-457103-m03" podCIDRs=["10.244.3.0/24"]
	I0930 11:42:33.400363       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.400462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.410037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:33.817223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:34.161895       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:38.454702       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:43.710884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:52.103851       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:42:52.104004       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:52.116883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:42:53.383193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:33.400260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:33.400660       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m03"
	I0930 11:43:33.415757       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:33.466240       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.629707ms"
	I0930 11:43:33.466597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.756µs"
	I0930 11:43:38.468232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:38.484490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:43:38.559900       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:43:48.645037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	
	
	==> kube-controller-manager [f90b4779f01ddcba9e5c18635b0cc4f9d51c8939f6ee232f1b054afe68793b7e] <==
	I0930 11:47:57.209145       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-457103-m03" podCIDRs=["10.244.2.0/24"]
	I0930 11:47:57.209275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.209982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.220708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.594039       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:47:57.918297       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:00.379973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:07.261833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:15.822161       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m03"
	I0930 11:48:15.822379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:15.835620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:20.377949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:20.704450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:20.734008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:21.209746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m03"
	I0930 11:48:21.209908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-457103-m02"
	I0930 11:49:00.395880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:49:00.414764       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:49:00.459489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.024323ms"
	I0930 11:49:00.461732       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="112.399µs"
	I0930 11:49:05.483668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-457103-m02"
	I0930 11:49:20.285059       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dkgwm"
	I0930 11:49:20.313355       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-dkgwm"
	I0930 11:49:20.313491       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-nr59l"
	I0930 11:49:20.339939       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-nr59l"
	
	
	==> kube-proxy [27d4e50ea8999a734dd96afc968d04ca8361ca54b9d260b310c1f26f1319638e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:40:05.504472       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:40:05.605877       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E0930 11:40:05.607818       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:40:05.670376       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:40:05.670469       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:40:05.670494       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:40:05.674557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:40:05.674824       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:40:05.674857       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:40:05.677059       1 config.go:199] "Starting service config controller"
	I0930 11:40:05.677101       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:40:05.677126       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:40:05.677130       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:40:05.677846       1 config.go:328] "Starting node config controller"
	I0930 11:40:05.677875       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:40:05.777161       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 11:40:05.777244       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:40:05.778703       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7258510df4e37ff0ac5b9151cff2cf76bf949b63e124e36bddc79059f891683e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 11:46:38.838355       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 11:46:38.872647       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	E0930 11:46:38.872749       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 11:46:38.958626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 11:46:38.958659       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 11:46:38.958683       1 server_linux.go:169] "Using iptables Proxier"
	I0930 11:46:38.975570       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 11:46:38.975958       1 server.go:483] "Version info" version="v1.31.1"
	I0930 11:46:38.975974       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:46:38.985008       1 config.go:199] "Starting service config controller"
	I0930 11:46:38.985053       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 11:46:38.985079       1 config.go:105] "Starting endpoint slice config controller"
	I0930 11:46:38.985084       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 11:46:38.985112       1 config.go:328] "Starting node config controller"
	I0930 11:46:38.985134       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 11:46:39.085232       1 shared_informer.go:320] Caches are synced for node config
	I0930 11:46:39.085282       1 shared_informer.go:320] Caches are synced for service config
	I0930 11:46:39.085303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d0c657343cf2c1774bc1198f6ff78d9879e971cc9371e871e2a2c0808dbf8867] <==
	E0930 11:39:56.589230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.441225       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 11:39:57.441289       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 11:39:57.467689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 11:39:57.467825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.514752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0930 11:39:57.514874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.581376       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 11:39:57.581500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.627985       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 11:39:57.628045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.665381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.665478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.718176       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 11:39:57.718231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.760155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 11:39:57.760213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.804591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 11:39:57.804644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.876053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.876112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 11:39:57.947148       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0930 11:39:57.947325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0930 11:39:59.657690       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0930 11:44:57.645601       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [eaf6af32d43e61a53fa73a967127e6a8fa1400ad8f63f18d96a881a4a682da6d] <==
	I0930 11:46:34.681821       1 serving.go:386] Generated self-signed cert in-memory
	W0930 11:46:36.929216       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 11:46:36.929475       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 11:46:36.929567       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 11:46:36.929597       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 11:46:36.968599       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 11:46:36.968703       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:46:36.980381       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 11:46:36.980664       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 11:46:36.980515       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 11:46:36.980573       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:46:37.082932       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 11:49:32 multinode-457103 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:49:32 multinode-457103 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:49:32 multinode-457103 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:49:32 multinode-457103 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:49:32 multinode-457103 kubelet[2957]: E0930 11:49:32.745226    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696972744916342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:49:32 multinode-457103 kubelet[2957]: E0930 11:49:32.745252    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696972744916342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:49:42 multinode-457103 kubelet[2957]: E0930 11:49:42.747102    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696982746724316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:49:42 multinode-457103 kubelet[2957]: E0930 11:49:42.747143    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696982746724316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:49:52 multinode-457103 kubelet[2957]: E0930 11:49:52.749097    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696992748737987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:49:52 multinode-457103 kubelet[2957]: E0930 11:49:52.749359    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727696992748737987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:02 multinode-457103 kubelet[2957]: E0930 11:50:02.751714    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697002751131852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:02 multinode-457103 kubelet[2957]: E0930 11:50:02.752021    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697002751131852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:12 multinode-457103 kubelet[2957]: E0930 11:50:12.754915    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697012753899375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:12 multinode-457103 kubelet[2957]: E0930 11:50:12.754970    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697012753899375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:22 multinode-457103 kubelet[2957]: E0930 11:50:22.756516    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697022756114741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:22 multinode-457103 kubelet[2957]: E0930 11:50:22.756794    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697022756114741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:32 multinode-457103 kubelet[2957]: E0930 11:50:32.731508    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 30 11:50:32 multinode-457103 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 30 11:50:32 multinode-457103 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 30 11:50:32 multinode-457103 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 30 11:50:32 multinode-457103 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 30 11:50:32 multinode-457103 kubelet[2957]: E0930 11:50:32.759943    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697032759148906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:32 multinode-457103 kubelet[2957]: E0930 11:50:32.759990    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697032759148906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:42 multinode-457103 kubelet[2957]: E0930 11:50:42.761832    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697042761376124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 30 11:50:42 multinode-457103 kubelet[2957]: E0930 11:50:42.761876    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697042761376124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 11:50:45.497538   47424 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-457103 -n multinode-457103
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-457103 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.77s)

                                                
                                    
x
+
TestPreload (166.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-299114 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0930 11:55:18.065892   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-299114 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.022230795s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-299114 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-299114 image pull gcr.io/k8s-minikube/busybox: (1.322201147s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-299114
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-299114: (7.292644092s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-299114 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-299114 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.523691612s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-299114 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-30 11:57:44.376506701 +0000 UTC m=+5839.945944426
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-299114 -n test-preload-299114
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-299114 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-299114 logs -n 25: (1.111644546s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103 sudo cat                                       | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt                       | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m02:/home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n                                                                 | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | multinode-457103-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-457103 ssh -n multinode-457103-m02 sudo cat                                   | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | /home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-457103 node stop m03                                                          | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	| node    | multinode-457103 node start                                                             | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC | 30 Sep 24 11:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| stop    | -p multinode-457103                                                                     | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:42 UTC |                     |
	| start   | -p multinode-457103                                                                     | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:44 UTC | 30 Sep 24 11:48 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC |                     |
	| node    | multinode-457103 node delete                                                            | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC | 30 Sep 24 11:48 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-457103 stop                                                                   | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:48 UTC |                     |
	| start   | -p multinode-457103                                                                     | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:50 UTC | 30 Sep 24 11:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-457103                                                                | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC |                     |
	| start   | -p multinode-457103-m02                                                                 | multinode-457103-m02 | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-457103-m03                                                                 | multinode-457103-m03 | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC | 30 Sep 24 11:54 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-457103                                                                 | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC |                     |
	| delete  | -p multinode-457103-m03                                                                 | multinode-457103-m03 | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC | 30 Sep 24 11:54 UTC |
	| delete  | -p multinode-457103                                                                     | multinode-457103     | jenkins | v1.34.0 | 30 Sep 24 11:54 UTC | 30 Sep 24 11:55 UTC |
	| start   | -p test-preload-299114                                                                  | test-preload-299114  | jenkins | v1.34.0 | 30 Sep 24 11:55 UTC | 30 Sep 24 11:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-299114 image pull                                                          | test-preload-299114  | jenkins | v1.34.0 | 30 Sep 24 11:56 UTC | 30 Sep 24 11:56 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-299114                                                                  | test-preload-299114  | jenkins | v1.34.0 | 30 Sep 24 11:56 UTC | 30 Sep 24 11:56 UTC |
	| start   | -p test-preload-299114                                                                  | test-preload-299114  | jenkins | v1.34.0 | 30 Sep 24 11:56 UTC | 30 Sep 24 11:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-299114 image list                                                          | test-preload-299114  | jenkins | v1.34.0 | 30 Sep 24 11:57 UTC | 30 Sep 24 11:57 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 11:56:39
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 11:56:39.664373   49883 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:56:39.664503   49883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:56:39.664512   49883 out.go:358] Setting ErrFile to fd 2...
	I0930 11:56:39.664516   49883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:56:39.664680   49883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:56:39.665186   49883 out.go:352] Setting JSON to false
	I0930 11:56:39.666068   49883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5947,"bootTime":1727691453,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:56:39.666169   49883 start.go:139] virtualization: kvm guest
	I0930 11:56:39.668653   49883 out.go:177] * [test-preload-299114] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:56:39.670260   49883 notify.go:220] Checking for updates...
	I0930 11:56:39.670264   49883 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:56:39.672003   49883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:56:39.673640   49883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:56:39.675263   49883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:56:39.676836   49883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:56:39.678275   49883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:56:39.680116   49883 config.go:182] Loaded profile config "test-preload-299114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 11:56:39.680523   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:56:39.680574   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:56:39.695225   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40957
	I0930 11:56:39.695692   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:56:39.696292   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:56:39.696322   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:56:39.696658   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:56:39.696842   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:56:39.698948   49883 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0930 11:56:39.700334   49883 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:56:39.700652   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:56:39.700687   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:56:39.715598   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0930 11:56:39.716057   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:56:39.716676   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:56:39.716705   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:56:39.717012   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:56:39.717186   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:56:39.753263   49883 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:56:39.754531   49883 start.go:297] selected driver: kvm2
	I0930 11:56:39.754555   49883 start.go:901] validating driver "kvm2" against &{Name:test-preload-299114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-299114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:56:39.754663   49883 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:56:39.755379   49883 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:56:39.755467   49883 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 11:56:39.771378   49883 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 11:56:39.771772   49883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:56:39.771805   49883 cni.go:84] Creating CNI manager for ""
	I0930 11:56:39.771846   49883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 11:56:39.771896   49883 start.go:340] cluster config:
	{Name:test-preload-299114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-299114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:56:39.771995   49883 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 11:56:39.774009   49883 out.go:177] * Starting "test-preload-299114" primary control-plane node in "test-preload-299114" cluster
	I0930 11:56:39.775292   49883 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 11:56:39.801809   49883 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0930 11:56:39.801843   49883 cache.go:56] Caching tarball of preloaded images
	I0930 11:56:39.802012   49883 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 11:56:39.804204   49883 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0930 11:56:39.805802   49883 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 11:56:39.829419   49883 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0930 11:56:43.107058   49883 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 11:56:43.107162   49883 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0930 11:56:43.947229   49883 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0930 11:56:43.947366   49883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/config.json ...
	I0930 11:56:43.947595   49883 start.go:360] acquireMachinesLock for test-preload-299114: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 11:56:43.947657   49883 start.go:364] duration metric: took 41.058µs to acquireMachinesLock for "test-preload-299114"
	I0930 11:56:43.947672   49883 start.go:96] Skipping create...Using existing machine configuration
	I0930 11:56:43.947677   49883 fix.go:54] fixHost starting: 
	I0930 11:56:43.947935   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:56:43.947968   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:56:43.962701   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0930 11:56:43.963194   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:56:43.963731   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:56:43.963761   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:56:43.964088   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:56:43.964291   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:56:43.964451   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetState
	I0930 11:56:43.966141   49883 fix.go:112] recreateIfNeeded on test-preload-299114: state=Stopped err=<nil>
	I0930 11:56:43.966167   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	W0930 11:56:43.966359   49883 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 11:56:43.968930   49883 out.go:177] * Restarting existing kvm2 VM for "test-preload-299114" ...
	I0930 11:56:43.970613   49883 main.go:141] libmachine: (test-preload-299114) Calling .Start
	I0930 11:56:43.970794   49883 main.go:141] libmachine: (test-preload-299114) Ensuring networks are active...
	I0930 11:56:43.971598   49883 main.go:141] libmachine: (test-preload-299114) Ensuring network default is active
	I0930 11:56:43.971921   49883 main.go:141] libmachine: (test-preload-299114) Ensuring network mk-test-preload-299114 is active
	I0930 11:56:43.972238   49883 main.go:141] libmachine: (test-preload-299114) Getting domain xml...
	I0930 11:56:43.972949   49883 main.go:141] libmachine: (test-preload-299114) Creating domain...
	I0930 11:56:45.173177   49883 main.go:141] libmachine: (test-preload-299114) Waiting to get IP...
	I0930 11:56:45.174013   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:45.174393   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:45.174457   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:45.174379   49935 retry.go:31] will retry after 223.082738ms: waiting for machine to come up
	I0930 11:56:45.398707   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:45.399103   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:45.399123   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:45.399068   49935 retry.go:31] will retry after 282.314852ms: waiting for machine to come up
	I0930 11:56:45.682716   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:45.683085   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:45.683116   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:45.683031   49935 retry.go:31] will retry after 422.167492ms: waiting for machine to come up
	I0930 11:56:46.106547   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:46.107021   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:46.107072   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:46.106995   49935 retry.go:31] will retry after 375.82503ms: waiting for machine to come up
	I0930 11:56:46.484732   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:46.485178   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:46.485204   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:46.485134   49935 retry.go:31] will retry after 723.857847ms: waiting for machine to come up
	I0930 11:56:47.211189   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:47.211549   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:47.211578   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:47.211504   49935 retry.go:31] will retry after 909.49516ms: waiting for machine to come up
	I0930 11:56:48.122545   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:48.122973   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:48.123001   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:48.122932   49935 retry.go:31] will retry after 1.03638625s: waiting for machine to come up
	I0930 11:56:49.161133   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:49.161530   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:49.161571   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:49.161489   49935 retry.go:31] will retry after 1.417503499s: waiting for machine to come up
	I0930 11:56:50.581257   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:50.581629   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:50.581656   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:50.581581   49935 retry.go:31] will retry after 1.334380627s: waiting for machine to come up
	I0930 11:56:51.918140   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:51.918648   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:51.918684   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:51.918590   49935 retry.go:31] will retry after 2.155267853s: waiting for machine to come up
	I0930 11:56:54.075443   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:54.075935   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:54.075971   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:54.075885   49935 retry.go:31] will retry after 2.15124875s: waiting for machine to come up
	I0930 11:56:56.230381   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:56.230862   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:56.230881   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:56.230831   49935 retry.go:31] will retry after 3.282137665s: waiting for machine to come up
	I0930 11:56:59.514722   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:56:59.515092   49883 main.go:141] libmachine: (test-preload-299114) DBG | unable to find current IP address of domain test-preload-299114 in network mk-test-preload-299114
	I0930 11:56:59.515134   49883 main.go:141] libmachine: (test-preload-299114) DBG | I0930 11:56:59.515044   49935 retry.go:31] will retry after 4.11119563s: waiting for machine to come up
	I0930 11:57:03.631272   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.631744   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has current primary IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.631771   49883 main.go:141] libmachine: (test-preload-299114) Found IP for machine: 192.168.39.169
	I0930 11:57:03.631785   49883 main.go:141] libmachine: (test-preload-299114) Reserving static IP address...
	I0930 11:57:03.632173   49883 main.go:141] libmachine: (test-preload-299114) Reserved static IP address: 192.168.39.169
	I0930 11:57:03.632228   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "test-preload-299114", mac: "52:54:00:81:4a:2e", ip: "192.168.39.169"} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:03.632244   49883 main.go:141] libmachine: (test-preload-299114) Waiting for SSH to be available...
	I0930 11:57:03.632263   49883 main.go:141] libmachine: (test-preload-299114) DBG | skip adding static IP to network mk-test-preload-299114 - found existing host DHCP lease matching {name: "test-preload-299114", mac: "52:54:00:81:4a:2e", ip: "192.168.39.169"}
	I0930 11:57:03.632281   49883 main.go:141] libmachine: (test-preload-299114) DBG | Getting to WaitForSSH function...
	I0930 11:57:03.634490   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.634787   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:03.634811   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.634922   49883 main.go:141] libmachine: (test-preload-299114) DBG | Using SSH client type: external
	I0930 11:57:03.634946   49883 main.go:141] libmachine: (test-preload-299114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa (-rw-------)
	I0930 11:57:03.634971   49883 main.go:141] libmachine: (test-preload-299114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 11:57:03.634979   49883 main.go:141] libmachine: (test-preload-299114) DBG | About to run SSH command:
	I0930 11:57:03.634988   49883 main.go:141] libmachine: (test-preload-299114) DBG | exit 0
	I0930 11:57:03.761972   49883 main.go:141] libmachine: (test-preload-299114) DBG | SSH cmd err, output: <nil>: 
	I0930 11:57:03.762362   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetConfigRaw
	I0930 11:57:03.763005   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetIP
	I0930 11:57:03.765446   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.765804   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:03.765836   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.766105   49883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/config.json ...
	I0930 11:57:03.766360   49883 machine.go:93] provisionDockerMachine start ...
	I0930 11:57:03.766384   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:03.766595   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:03.768893   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.769228   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:03.769256   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.769476   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:03.769663   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:03.769793   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:03.769908   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:03.770061   49883 main.go:141] libmachine: Using SSH client type: native
	I0930 11:57:03.770400   49883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0930 11:57:03.770416   49883 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 11:57:03.882289   49883 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 11:57:03.882328   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetMachineName
	I0930 11:57:03.882581   49883 buildroot.go:166] provisioning hostname "test-preload-299114"
	I0930 11:57:03.882609   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetMachineName
	I0930 11:57:03.882813   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:03.885401   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.885686   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:03.885715   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:03.885860   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:03.886030   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:03.886183   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:03.886298   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:03.886445   49883 main.go:141] libmachine: Using SSH client type: native
	I0930 11:57:03.886619   49883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0930 11:57:03.886630   49883 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-299114 && echo "test-preload-299114" | sudo tee /etc/hostname
	I0930 11:57:04.011777   49883 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-299114
	
	I0930 11:57:04.011811   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.014487   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.014849   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.014889   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.015028   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.015201   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.015364   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.015474   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.015629   49883 main.go:141] libmachine: Using SSH client type: native
	I0930 11:57:04.015833   49883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0930 11:57:04.015849   49883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-299114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-299114/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-299114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 11:57:04.135464   49883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 11:57:04.135493   49883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 11:57:04.135518   49883 buildroot.go:174] setting up certificates
	I0930 11:57:04.135527   49883 provision.go:84] configureAuth start
	I0930 11:57:04.135538   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetMachineName
	I0930 11:57:04.135830   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetIP
	I0930 11:57:04.138184   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.138496   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.138530   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.138661   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.140824   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.141192   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.141215   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.141381   49883 provision.go:143] copyHostCerts
	I0930 11:57:04.141447   49883 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 11:57:04.141457   49883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 11:57:04.141525   49883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 11:57:04.141653   49883 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 11:57:04.141664   49883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 11:57:04.141707   49883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 11:57:04.141773   49883 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 11:57:04.141780   49883 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 11:57:04.141804   49883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 11:57:04.141857   49883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.test-preload-299114 san=[127.0.0.1 192.168.39.169 localhost minikube test-preload-299114]
	I0930 11:57:04.243462   49883 provision.go:177] copyRemoteCerts
	I0930 11:57:04.243515   49883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 11:57:04.243550   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.246125   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.246461   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.246494   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.246608   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.246826   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.247003   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.247138   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:04.336966   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 11:57:04.361768   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0930 11:57:04.386826   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 11:57:04.412027   49883 provision.go:87] duration metric: took 276.487754ms to configureAuth
	I0930 11:57:04.412052   49883 buildroot.go:189] setting minikube options for container-runtime
	I0930 11:57:04.412211   49883 config.go:182] Loaded profile config "test-preload-299114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 11:57:04.412279   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.414744   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.414993   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.415020   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.415192   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.415415   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.415597   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.415705   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.415848   49883 main.go:141] libmachine: Using SSH client type: native
	I0930 11:57:04.416040   49883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0930 11:57:04.416056   49883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 11:57:04.660105   49883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 11:57:04.660134   49883 machine.go:96] duration metric: took 893.760715ms to provisionDockerMachine
	I0930 11:57:04.660144   49883 start.go:293] postStartSetup for "test-preload-299114" (driver="kvm2")
	I0930 11:57:04.660155   49883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 11:57:04.660170   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:04.660423   49883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 11:57:04.660449   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.662889   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.663237   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.663265   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.663417   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.663633   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.663784   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.663939   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:04.749170   49883 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 11:57:04.753383   49883 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 11:57:04.753413   49883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 11:57:04.753472   49883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 11:57:04.753546   49883 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 11:57:04.753656   49883 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 11:57:04.763307   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:57:04.787363   49883 start.go:296] duration metric: took 127.201177ms for postStartSetup
	I0930 11:57:04.787410   49883 fix.go:56] duration metric: took 20.839731637s for fixHost
	I0930 11:57:04.787434   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.790292   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.790637   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.790670   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.790820   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.791004   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.791156   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.791276   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.791404   49883 main.go:141] libmachine: Using SSH client type: native
	I0930 11:57:04.791636   49883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I0930 11:57:04.791650   49883 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 11:57:04.902478   49883 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727697424.878307406
	
	I0930 11:57:04.902502   49883 fix.go:216] guest clock: 1727697424.878307406
	I0930 11:57:04.902509   49883 fix.go:229] Guest: 2024-09-30 11:57:04.878307406 +0000 UTC Remote: 2024-09-30 11:57:04.787414899 +0000 UTC m=+25.159241269 (delta=90.892507ms)
	I0930 11:57:04.902531   49883 fix.go:200] guest clock delta is within tolerance: 90.892507ms
	I0930 11:57:04.902535   49883 start.go:83] releasing machines lock for "test-preload-299114", held for 20.954868994s
	I0930 11:57:04.902556   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:04.902802   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetIP
	I0930 11:57:04.905175   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.905487   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.905515   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.905713   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:04.906190   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:04.906359   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:04.906449   49883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 11:57:04.906500   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.906543   49883 ssh_runner.go:195] Run: cat /version.json
	I0930 11:57:04.906569   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:04.909219   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.909588   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.909635   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.909656   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.909786   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.909949   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.909977   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:04.909997   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:04.910100   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.910173   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:04.910241   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:04.910315   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:04.910446   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:04.910571   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:05.024081   49883 ssh_runner.go:195] Run: systemctl --version
	I0930 11:57:05.030258   49883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 11:57:05.178829   49883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 11:57:05.185848   49883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 11:57:05.185908   49883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 11:57:05.202991   49883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 11:57:05.203016   49883 start.go:495] detecting cgroup driver to use...
	I0930 11:57:05.203070   49883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 11:57:05.218882   49883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 11:57:05.233055   49883 docker.go:217] disabling cri-docker service (if available) ...
	I0930 11:57:05.233106   49883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 11:57:05.246443   49883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 11:57:05.260464   49883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 11:57:05.374591   49883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 11:57:05.523170   49883 docker.go:233] disabling docker service ...
	I0930 11:57:05.523233   49883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 11:57:05.538264   49883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 11:57:05.551754   49883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 11:57:05.690338   49883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 11:57:05.814815   49883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 11:57:05.829973   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 11:57:05.849844   49883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0930 11:57:05.849926   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.861153   49883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 11:57:05.861216   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.872571   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.883826   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.895186   49883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 11:57:05.906471   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.917100   49883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 11:57:05.934881   49883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
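Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands shown, not copied from the VM; TOML section headers omitted since the seds only touch these lines):

	pause_image = "registry.k8s.io/pause:3.7"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]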
	I0930 11:57:05.946262   49883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 11:57:05.957323   49883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 11:57:05.957382   49883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 11:57:05.971741   49883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
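The failed sysctl probe above (exit status 255) only means the br_netfilter module was not loaded yet; the log shows minikube falling back to modprobe and then enabling IPv4 forwarding. A minimal Go sketch of that check-then-load sequence, using plain os/exec instead of minikube's internal ssh_runner (illustrative only, not minikube source):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func ensureBridgeNetfilter() error {
		// Probe the sysctl first; it only exists once br_netfilter is loaded.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// Module missing: load it so the bridge netfilter sysctls appear.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		// Enable IPv4 forwarding, as the log does with "echo 1 > /proc/sys/net/ipv4/ip_forward".
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}
	
	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Println("bridge netfilter setup failed:", err)
		}
	}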
	I0930 11:57:05.982160   49883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:57:06.101970   49883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 11:57:06.200083   49883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 11:57:06.200168   49883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 11:57:06.205085   49883 start.go:563] Will wait 60s for crictl version
	I0930 11:57:06.205135   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:06.208962   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 11:57:06.248181   49883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 11:57:06.248266   49883 ssh_runner.go:195] Run: crio --version
	I0930 11:57:06.275726   49883 ssh_runner.go:195] Run: crio --version
	I0930 11:57:06.307257   49883 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0930 11:57:06.308406   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetIP
	I0930 11:57:06.311277   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:06.311643   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:06.311673   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:06.311848   49883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0930 11:57:06.316225   49883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:57:06.329488   49883 kubeadm.go:883] updating cluster {Name:test-preload-299114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-299114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 11:57:06.329597   49883 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0930 11:57:06.329659   49883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:57:06.369008   49883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0930 11:57:06.369090   49883 ssh_runner.go:195] Run: which lz4
	I0930 11:57:06.373480   49883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 11:57:06.377807   49883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 11:57:06.377835   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0930 11:57:07.954354   49883 crio.go:462] duration metric: took 1.580915228s to copy over tarball
	I0930 11:57:07.954416   49883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 11:57:10.395724   49883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.441280984s)
	I0930 11:57:10.395755   49883 crio.go:469] duration metric: took 2.441375634s to extract the tarball
	I0930 11:57:10.395763   49883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 11:57:10.437189   49883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 11:57:10.490412   49883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0930 11:57:10.490442   49883 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 11:57:10.490513   49883 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:57:10.490530   49883 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:10.490540   49883 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.490565   49883 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:10.490594   49883 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.490567   49883 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:10.490628   49883 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0930 11:57:10.490598   49883 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.491869   49883 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.491881   49883 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.491879   49883 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.491899   49883 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:57:10.491872   49883 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0930 11:57:10.491986   49883 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:10.491989   49883 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:10.492037   49883 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:10.666596   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.667704   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.669560   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:10.669730   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0930 11:57:10.671703   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.679900   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:10.755072   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:10.804504   49883 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0930 11:57:10.804552   49883 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.804563   49883 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0930 11:57:10.804592   49883 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.804599   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.804631   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.851997   49883 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0930 11:57:10.852023   49883 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0930 11:57:10.852042   49883 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.852045   49883 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0930 11:57:10.852074   49883 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0930 11:57:10.852052   49883 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:10.852094   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.852110   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.852113   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.858633   49883 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0930 11:57:10.858673   49883 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:10.858695   49883 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0930 11:57:10.858717   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.858730   49883 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:10.858767   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.858773   49883 ssh_runner.go:195] Run: which crictl
	I0930 11:57:10.858820   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.861973   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.862985   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 11:57:10.863060   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:10.901606   49883 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:57:10.983892   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:10.983938   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:10.994868   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:10.994927   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:10.995002   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:10.995069   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 11:57:10.995144   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:11.166624   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0930 11:57:11.166686   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:11.189458   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0930 11:57:11.189474   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:11.189556   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0930 11:57:11.189596   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0930 11:57:11.189671   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0930 11:57:11.336413   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0930 11:57:11.336492   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0930 11:57:11.336575   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0930 11:57:11.353141   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0930 11:57:11.353231   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 11:57:11.369911   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0930 11:57:11.369929   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0930 11:57:11.370015   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0930 11:57:11.370023   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 11:57:11.370062   49883 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0930 11:57:11.370081   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0930 11:57:11.370124   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0930 11:57:11.370136   49883 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0930 11:57:11.370142   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0930 11:57:11.370160   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0930 11:57:11.405739   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0930 11:57:11.406133   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0930 11:57:11.406223   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 11:57:14.667943   49883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (3.29775914s)
	I0930 11:57:14.667975   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0930 11:57:14.668001   49883 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 11:57:14.668049   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0930 11:57:14.668048   49883 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.298003423s)
	I0930 11:57:14.668089   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0930 11:57:14.668149   49883 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (3.298114499s)
	I0930 11:57:14.668176   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0930 11:57:14.668239   49883 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.298079341s)
	I0930 11:57:14.668264   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0930 11:57:14.668290   49883 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (3.298206023s)
	I0930 11:57:14.668323   49883 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0930 11:57:14.668340   49883 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.262102938s)
	I0930 11:57:14.668358   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0930 11:57:14.668400   49883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 11:57:15.114639   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0930 11:57:15.114681   49883 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 11:57:15.114745   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0930 11:57:15.114750   49883 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0930 11:57:15.863066   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0930 11:57:15.863111   49883 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0930 11:57:15.863159   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0930 11:57:18.022885   49883 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.15969935s)
	I0930 11:57:18.022926   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0930 11:57:18.022964   49883 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0930 11:57:18.023017   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0930 11:57:18.167309   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0930 11:57:18.167355   49883 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 11:57:18.167402   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0930 11:57:18.921061   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0930 11:57:18.921112   49883 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 11:57:18.921154   49883 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0930 11:57:19.768563   49883 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0930 11:57:19.768623   49883 cache_images.go:123] Successfully loaded all cached images
	I0930 11:57:19.768632   49883 cache_images.go:92] duration metric: took 9.278175443s to LoadCachedImages
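The image-cache phase above probes each required image with "podman image inspect", removes stale tags with "crictl rmi", copies the cached tarballs over SSH, and loads them one at a time with "podman load". A hedged Go sketch of just the load loop (hypothetical paths, standalone exec calls rather than minikube's cache_images.go):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// loadCachedImages runs "sudo podman load -i <tarball>" for each cached image,
	// mirroring the sequential loads between 11:57:11 and 11:57:19 in the log.
	func loadCachedImages(tarballs []string) error {
		for _, t := range tarballs {
			out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load %s: %v\n%s", t, err, out)
			}
		}
		return nil
	}
	
	func main() {
		imgs := []string{ // paths assumed for illustration
			"/var/lib/minikube/images/coredns_v1.8.6",
			"/var/lib/minikube/images/kube-apiserver_v1.24.4",
		}
		if err := loadCachedImages(imgs); err != nil {
			fmt.Println(err)
		}
	}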
	I0930 11:57:19.768647   49883 kubeadm.go:934] updating node { 192.168.39.169 8443 v1.24.4 crio true true} ...
	I0930 11:57:19.768753   49883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-299114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-299114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 11:57:19.768829   49883 ssh_runner.go:195] Run: crio config
	I0930 11:57:19.828334   49883 cni.go:84] Creating CNI manager for ""
	I0930 11:57:19.828356   49883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 11:57:19.828365   49883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 11:57:19.828382   49883 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.169 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-299114 NodeName:test-preload-299114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 11:57:19.828505   49883 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-299114"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.169
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.169"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 11:57:19.828563   49883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0930 11:57:19.839226   49883 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 11:57:19.839333   49883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 11:57:19.849888   49883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0930 11:57:19.867084   49883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 11:57:19.884588   49883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0930 11:57:19.902916   49883 ssh_runner.go:195] Run: grep 192.168.39.169	control-plane.minikube.internal$ /etc/hosts
	I0930 11:57:19.907187   49883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 11:57:19.920689   49883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:57:20.048962   49883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:57:20.067516   49883 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114 for IP: 192.168.39.169
	I0930 11:57:20.067541   49883 certs.go:194] generating shared ca certs ...
	I0930 11:57:20.067560   49883 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:57:20.067718   49883 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 11:57:20.067785   49883 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 11:57:20.067798   49883 certs.go:256] generating profile certs ...
	I0930 11:57:20.067902   49883 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.key
	I0930 11:57:20.067984   49883 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/apiserver.key.23677660
	I0930 11:57:20.068027   49883 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/proxy-client.key
	I0930 11:57:20.068186   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 11:57:20.068230   49883 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 11:57:20.068244   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 11:57:20.068286   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 11:57:20.068317   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 11:57:20.068349   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 11:57:20.068400   49883 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 11:57:20.069233   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 11:57:20.110284   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 11:57:20.148731   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 11:57:20.185651   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 11:57:20.216928   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 11:57:20.252895   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0930 11:57:20.287534   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 11:57:20.314568   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 11:57:20.340764   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 11:57:20.366824   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 11:57:20.392919   49883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 11:57:20.418612   49883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 11:57:20.436569   49883 ssh_runner.go:195] Run: openssl version
	I0930 11:57:20.442781   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 11:57:20.454712   49883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 11:57:20.459732   49883 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 11:57:20.459809   49883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 11:57:20.465940   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 11:57:20.477815   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 11:57:20.489559   49883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:57:20.494827   49883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:57:20.494905   49883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 11:57:20.501463   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 11:57:20.513149   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 11:57:20.524888   49883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 11:57:20.529693   49883 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 11:57:20.529753   49883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 11:57:20.535517   49883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 11:57:20.547171   49883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 11:57:20.551979   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 11:57:20.558257   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 11:57:20.564534   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 11:57:20.570796   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 11:57:20.577126   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 11:57:20.583626   49883 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
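Each "openssl x509 -checkend 86400" call above verifies that the corresponding control-plane certificate stays valid for at least another 24 hours. The same check can be sketched in Go with crypto/x509 (illustrative only, not minikube's certs.go):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// checkEnd fails if the certificate at path expires within d,
	// matching the semantics of "openssl x509 -checkend <seconds>".
	func checkEnd(path string, d time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(d).After(cert.NotAfter) {
			return fmt.Errorf("%s expires within %s (NotAfter=%s)", path, d, cert.NotAfter)
		}
		return nil
	}
	
	func main() {
		// Path assumed for illustration only.
		if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
			fmt.Println(err)
		}
	}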
	I0930 11:57:20.589818   49883 kubeadm.go:392] StartCluster: {Name:test-preload-299114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-299114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:57:20.589910   49883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 11:57:20.589963   49883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:57:20.628963   49883 cri.go:89] found id: ""
	I0930 11:57:20.629029   49883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 11:57:20.640451   49883 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 11:57:20.640474   49883 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 11:57:20.640542   49883 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 11:57:20.651074   49883 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:57:20.651515   49883 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-299114" does not appear in /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:57:20.651658   49883 kubeconfig.go:62] /home/jenkins/minikube-integration/19734-3842/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-299114" cluster setting kubeconfig missing "test-preload-299114" context setting]
	I0930 11:57:20.651945   49883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:57:20.652535   49883 kapi.go:59] client config for test-preload-299114: &rest.Config{Host:"https://192.168.39.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:57:20.653191   49883 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 11:57:20.664318   49883 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.169
	I0930 11:57:20.664358   49883 kubeadm.go:1160] stopping kube-system containers ...
	I0930 11:57:20.664372   49883 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 11:57:20.664446   49883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 11:57:20.701787   49883 cri.go:89] found id: ""
	I0930 11:57:20.701867   49883 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 11:57:20.720395   49883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 11:57:20.731329   49883 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 11:57:20.731361   49883 kubeadm.go:157] found existing configuration files:
	
	I0930 11:57:20.731422   49883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 11:57:20.741940   49883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 11:57:20.742017   49883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 11:57:20.752892   49883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 11:57:20.763092   49883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 11:57:20.763167   49883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 11:57:20.773854   49883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 11:57:20.783857   49883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 11:57:20.783921   49883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 11:57:20.794044   49883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 11:57:20.803972   49883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 11:57:20.804026   49883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 11:57:20.814448   49883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 11:57:20.825289   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 11:57:20.936762   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 11:57:21.881581   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 11:57:22.150713   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 11:57:22.217713   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
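On this restart path the cluster is rebuilt by re-running individual "kubeadm init phase" subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, rather than a full "kubeadm init". Roughly equivalent, as a standalone Go sketch (illustrative only):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same phase order as the log above.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("kubeadm init phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
	}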
	I0930 11:57:22.291380   49883 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:57:22.291468   49883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:57:22.792233   49883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:57:23.291891   49883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:57:23.346429   49883 api_server.go:72] duration metric: took 1.055046077s to wait for apiserver process to appear ...
	I0930 11:57:23.346459   49883 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:57:23.346483   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:23.347081   49883 api_server.go:269] stopped: https://192.168.39.169:8443/healthz: Get "https://192.168.39.169:8443/healthz": dial tcp 192.168.39.169:8443: connect: connection refused
	I0930 11:57:23.846648   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:23.847252   49883 api_server.go:269] stopped: https://192.168.39.169:8443/healthz: Get "https://192.168.39.169:8443/healthz": dial tcp 192.168.39.169:8443: connect: connection refused
	I0930 11:57:24.347389   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:27.251575   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 11:57:27.251612   49883 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 11:57:27.251631   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:27.286585   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0930 11:57:27.286610   49883 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0930 11:57:27.346811   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:27.375595   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 11:57:27.375620   49883 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 11:57:27.847338   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:27.852932   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 11:57:27.852971   49883 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 11:57:28.347587   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:28.371989   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0930 11:57:28.372027   49883 api_server.go:103] status: https://192.168.39.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0930 11:57:28.846561   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:28.854912   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0930 11:57:28.863971   49883 api_server.go:141] control plane version: v1.24.4
	I0930 11:57:28.863998   49883 api_server.go:131] duration metric: took 5.517532s to wait for apiserver health ...
	I0930 11:57:28.864006   49883 cni.go:84] Creating CNI manager for ""
	I0930 11:57:28.864013   49883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 11:57:28.865805   49883 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 11:57:28.866989   49883 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 11:57:28.878429   49883 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 11:57:28.918616   49883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:57:28.918708   49883 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0930 11:57:28.918732   49883 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0930 11:57:28.933580   49883 system_pods.go:59] 8 kube-system pods found
	I0930 11:57:28.933649   49883 system_pods.go:61] "coredns-6d4b75cb6d-77fdr" [63737ae3-7dfb-494d-b1f0-3514305f7d8d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0930 11:57:28.933660   49883 system_pods.go:61] "coredns-6d4b75cb6d-9p747" [d3ca5452-59e4-4b4a-b628-92952c07c82f] Running
	I0930 11:57:28.933671   49883 system_pods.go:61] "etcd-test-preload-299114" [5ef6423d-d45a-4d1e-b19d-83e304c31a39] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0930 11:57:28.933676   49883 system_pods.go:61] "kube-apiserver-test-preload-299114" [c7f7ffdc-09a4-4b4c-8f94-bc2fcedd61df] Running
	I0930 11:57:28.933683   49883 system_pods.go:61] "kube-controller-manager-test-preload-299114" [f86c5029-7b60-4c53-becb-f226681d01a2] Running
	I0930 11:57:28.933689   49883 system_pods.go:61] "kube-proxy-2jtb6" [5912de81-1463-429e-a5c9-9be973f341a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0930 11:57:28.933694   49883 system_pods.go:61] "kube-scheduler-test-preload-299114" [dc70061f-6a51-4333-bcb6-ec7ce3afabff] Running
	I0930 11:57:28.933705   49883 system_pods.go:61] "storage-provisioner" [6b91f25b-c36e-4f3b-8555-d91523973fd4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0930 11:57:28.933719   49883 system_pods.go:74] duration metric: took 15.074773ms to wait for pod list to return data ...
	I0930 11:57:28.933729   49883 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:57:28.937703   49883 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:57:28.937738   49883 node_conditions.go:123] node cpu capacity is 2
	I0930 11:57:28.937751   49883 node_conditions.go:105] duration metric: took 4.013939ms to run NodePressure ...
	I0930 11:57:28.937772   49883 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 11:57:29.154178   49883 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0930 11:57:29.158403   49883 kubeadm.go:739] kubelet initialised
	I0930 11:57:29.158425   49883 kubeadm.go:740] duration metric: took 4.221622ms waiting for restarted kubelet to initialise ...
	I0930 11:57:29.158432   49883 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:57:29.165225   49883 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:29.171625   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.171650   49883 pod_ready.go:82] duration metric: took 6.40184ms for pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:29.171659   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.171665   49883 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9p747" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:29.178820   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "coredns-6d4b75cb6d-9p747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.178859   49883 pod_ready.go:82] duration metric: took 7.177634ms for pod "coredns-6d4b75cb6d-9p747" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:29.178871   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "coredns-6d4b75cb6d-9p747" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.178880   49883 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:29.184647   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "etcd-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.184670   49883 pod_ready.go:82] duration metric: took 5.78169ms for pod "etcd-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:29.184679   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "etcd-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.184685   49883 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:29.322343   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "kube-apiserver-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.322368   49883 pod_ready.go:82] duration metric: took 137.674726ms for pod "kube-apiserver-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:29.322377   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "kube-apiserver-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.322383   49883 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:29.723488   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.723518   49883 pod_ready.go:82] duration metric: took 401.125723ms for pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:29.723527   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:29.723533   49883 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2jtb6" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:30.121876   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "kube-proxy-2jtb6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:30.121901   49883 pod_ready.go:82] duration metric: took 398.358101ms for pod "kube-proxy-2jtb6" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:30.121911   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "kube-proxy-2jtb6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:30.121918   49883 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:30.523661   49883 pod_ready.go:98] node "test-preload-299114" hosting pod "kube-scheduler-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:30.523693   49883 pod_ready.go:82] duration metric: took 401.766924ms for pod "kube-scheduler-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	E0930 11:57:30.523705   49883 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-299114" hosting pod "kube-scheduler-test-preload-299114" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:30.523714   49883 pod_ready.go:39] duration metric: took 1.365273591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:57:30.523736   49883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 11:57:30.536517   49883 ops.go:34] apiserver oom_adj: -16
	I0930 11:57:30.536538   49883 kubeadm.go:597] duration metric: took 9.896058165s to restartPrimaryControlPlane
	I0930 11:57:30.536547   49883 kubeadm.go:394] duration metric: took 9.946736284s to StartCluster
	I0930 11:57:30.536561   49883 settings.go:142] acquiring lock: {Name:mkdbb7ee3f4e112a79c58917f833dfd72cc7c3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:57:30.536627   49883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:57:30.537222   49883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/kubeconfig: {Name:mkbcc26962ad9a46a600e5f0a5facf24bf9d408d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 11:57:30.537463   49883 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.169 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 11:57:30.537525   49883 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0930 11:57:30.537635   49883 addons.go:69] Setting storage-provisioner=true in profile "test-preload-299114"
	I0930 11:57:30.537652   49883 config.go:182] Loaded profile config "test-preload-299114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0930 11:57:30.537656   49883 addons.go:234] Setting addon storage-provisioner=true in "test-preload-299114"
	W0930 11:57:30.537713   49883 addons.go:243] addon storage-provisioner should already be in state true
	I0930 11:57:30.537733   49883 host.go:66] Checking if "test-preload-299114" exists ...
	I0930 11:57:30.537656   49883 addons.go:69] Setting default-storageclass=true in profile "test-preload-299114"
	I0930 11:57:30.537770   49883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-299114"
	I0930 11:57:30.538023   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:57:30.538058   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:57:30.538222   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:57:30.538268   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:57:30.539393   49883 out.go:177] * Verifying Kubernetes components...
	I0930 11:57:30.540770   49883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 11:57:30.553447   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37819
	I0930 11:57:30.553461   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0930 11:57:30.553861   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:57:30.553863   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:57:30.554341   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:57:30.554357   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:57:30.554482   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:57:30.554499   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:57:30.554701   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:57:30.554814   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:57:30.554978   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetState
	I0930 11:57:30.555276   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:57:30.555324   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:57:30.557354   49883 kapi.go:59] client config for test-preload-299114: &rest.Config{Host:"https://192.168.39.169:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.crt", KeyFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.key", CAFile:"/home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0930 11:57:30.557733   49883 addons.go:234] Setting addon default-storageclass=true in "test-preload-299114"
	W0930 11:57:30.557761   49883 addons.go:243] addon default-storageclass should already be in state true
	I0930 11:57:30.557791   49883 host.go:66] Checking if "test-preload-299114" exists ...
	I0930 11:57:30.558154   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:57:30.558196   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:57:30.572947   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0930 11:57:30.573510   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:57:30.574128   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:57:30.574155   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:57:30.574234   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39085
	I0930 11:57:30.574542   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:57:30.574592   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:57:30.575013   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:57:30.575045   49883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:57:30.575066   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:57:30.575070   49883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:57:30.575408   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:57:30.575577   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetState
	I0930 11:57:30.577009   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:30.578933   49883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 11:57:30.580298   49883 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:57:30.580316   49883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 11:57:30.580337   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:30.583252   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:30.583659   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:30.583699   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:30.583866   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:30.584054   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:30.584210   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:30.584315   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:30.613109   49883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0930 11:57:30.613570   49883 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:57:30.614083   49883 main.go:141] libmachine: Using API Version  1
	I0930 11:57:30.614104   49883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:57:30.614468   49883 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:57:30.614682   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetState
	I0930 11:57:30.616255   49883 main.go:141] libmachine: (test-preload-299114) Calling .DriverName
	I0930 11:57:30.616458   49883 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 11:57:30.616478   49883 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 11:57:30.616500   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHHostname
	I0930 11:57:30.619244   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:30.619721   49883 main.go:141] libmachine: (test-preload-299114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:4a:2e", ip: ""} in network mk-test-preload-299114: {Iface:virbr1 ExpiryTime:2024-09-30 12:56:55 +0000 UTC Type:0 Mac:52:54:00:81:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:test-preload-299114 Clientid:01:52:54:00:81:4a:2e}
	I0930 11:57:30.619747   49883 main.go:141] libmachine: (test-preload-299114) DBG | domain test-preload-299114 has defined IP address 192.168.39.169 and MAC address 52:54:00:81:4a:2e in network mk-test-preload-299114
	I0930 11:57:30.619967   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHPort
	I0930 11:57:30.620153   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHKeyPath
	I0930 11:57:30.620384   49883 main.go:141] libmachine: (test-preload-299114) Calling .GetSSHUsername
	I0930 11:57:30.620517   49883 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/test-preload-299114/id_rsa Username:docker}
	I0930 11:57:30.728327   49883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 11:57:30.746186   49883 node_ready.go:35] waiting up to 6m0s for node "test-preload-299114" to be "Ready" ...
	I0930 11:57:30.824214   49883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 11:57:30.875732   49883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 11:57:31.850433   49883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026169798s)
	I0930 11:57:31.850471   49883 main.go:141] libmachine: Making call to close driver server
	I0930 11:57:31.850481   49883 main.go:141] libmachine: (test-preload-299114) Calling .Close
	I0930 11:57:31.850572   49883 main.go:141] libmachine: Making call to close driver server
	I0930 11:57:31.850598   49883 main.go:141] libmachine: (test-preload-299114) Calling .Close
	I0930 11:57:31.850820   49883 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:57:31.850839   49883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:57:31.850848   49883 main.go:141] libmachine: Making call to close driver server
	I0930 11:57:31.850852   49883 main.go:141] libmachine: (test-preload-299114) DBG | Closing plugin on server side
	I0930 11:57:31.850875   49883 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:57:31.850887   49883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:57:31.850894   49883 main.go:141] libmachine: Making call to close driver server
	I0930 11:57:31.850901   49883 main.go:141] libmachine: (test-preload-299114) Calling .Close
	I0930 11:57:31.850855   49883 main.go:141] libmachine: (test-preload-299114) Calling .Close
	I0930 11:57:31.851093   49883 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:57:31.851106   49883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:57:31.851110   49883 main.go:141] libmachine: (test-preload-299114) DBG | Closing plugin on server side
	I0930 11:57:31.851128   49883 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:57:31.851140   49883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:57:31.851141   49883 main.go:141] libmachine: (test-preload-299114) DBG | Closing plugin on server side
	I0930 11:57:31.857390   49883 main.go:141] libmachine: Making call to close driver server
	I0930 11:57:31.857407   49883 main.go:141] libmachine: (test-preload-299114) Calling .Close
	I0930 11:57:31.857681   49883 main.go:141] libmachine: Successfully made call to close driver server
	I0930 11:57:31.857700   49883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0930 11:57:31.859778   49883 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0930 11:57:31.860919   49883 addons.go:510] duration metric: took 1.32339611s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0930 11:57:32.749991   49883 node_ready.go:53] node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:34.750459   49883 node_ready.go:53] node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:36.750898   49883 node_ready.go:53] node "test-preload-299114" has status "Ready":"False"
	I0930 11:57:37.749950   49883 node_ready.go:49] node "test-preload-299114" has status "Ready":"True"
	I0930 11:57:37.749977   49883 node_ready.go:38] duration metric: took 7.003754674s for node "test-preload-299114" to be "Ready" ...
	I0930 11:57:37.749987   49883 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:57:37.754999   49883 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:37.760355   49883 pod_ready.go:93] pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:37.760381   49883 pod_ready.go:82] duration metric: took 5.356343ms for pod "coredns-6d4b75cb6d-77fdr" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:37.760391   49883 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:39.766865   49883 pod_ready.go:103] pod "etcd-test-preload-299114" in "kube-system" namespace has status "Ready":"False"
	I0930 11:57:41.768024   49883 pod_ready.go:103] pod "etcd-test-preload-299114" in "kube-system" namespace has status "Ready":"False"
	I0930 11:57:43.267243   49883 pod_ready.go:93] pod "etcd-test-preload-299114" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:43.267263   49883 pod_ready.go:82] duration metric: took 5.506864697s for pod "etcd-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.267273   49883 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.272019   49883 pod_ready.go:93] pod "kube-apiserver-test-preload-299114" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:43.272037   49883 pod_ready.go:82] duration metric: took 4.758533ms for pod "kube-apiserver-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.272045   49883 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.276008   49883 pod_ready.go:93] pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:43.276023   49883 pod_ready.go:82] duration metric: took 3.972183ms for pod "kube-controller-manager-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.276031   49883 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2jtb6" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.279656   49883 pod_ready.go:93] pod "kube-proxy-2jtb6" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:43.279673   49883 pod_ready.go:82] duration metric: took 3.636918ms for pod "kube-proxy-2jtb6" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.279685   49883 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.283332   49883 pod_ready.go:93] pod "kube-scheduler-test-preload-299114" in "kube-system" namespace has status "Ready":"True"
	I0930 11:57:43.283350   49883 pod_ready.go:82] duration metric: took 3.658825ms for pod "kube-scheduler-test-preload-299114" in "kube-system" namespace to be "Ready" ...
	I0930 11:57:43.283359   49883 pod_ready.go:39] duration metric: took 5.533360081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 11:57:43.283371   49883 api_server.go:52] waiting for apiserver process to appear ...
	I0930 11:57:43.283421   49883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:57:43.298820   49883 api_server.go:72] duration metric: took 12.761325246s to wait for apiserver process to appear ...
	I0930 11:57:43.298852   49883 api_server.go:88] waiting for apiserver healthz status ...
	I0930 11:57:43.298884   49883 api_server.go:253] Checking apiserver healthz at https://192.168.39.169:8443/healthz ...
	I0930 11:57:43.304158   49883 api_server.go:279] https://192.168.39.169:8443/healthz returned 200:
	ok
	I0930 11:57:43.305008   49883 api_server.go:141] control plane version: v1.24.4
	I0930 11:57:43.305033   49883 api_server.go:131] duration metric: took 6.166154ms to wait for apiserver health ...
	I0930 11:57:43.305041   49883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 11:57:43.466677   49883 system_pods.go:59] 7 kube-system pods found
	I0930 11:57:43.466707   49883 system_pods.go:61] "coredns-6d4b75cb6d-77fdr" [63737ae3-7dfb-494d-b1f0-3514305f7d8d] Running
	I0930 11:57:43.466713   49883 system_pods.go:61] "etcd-test-preload-299114" [5ef6423d-d45a-4d1e-b19d-83e304c31a39] Running
	I0930 11:57:43.466719   49883 system_pods.go:61] "kube-apiserver-test-preload-299114" [c7f7ffdc-09a4-4b4c-8f94-bc2fcedd61df] Running
	I0930 11:57:43.466725   49883 system_pods.go:61] "kube-controller-manager-test-preload-299114" [f86c5029-7b60-4c53-becb-f226681d01a2] Running
	I0930 11:57:43.466731   49883 system_pods.go:61] "kube-proxy-2jtb6" [5912de81-1463-429e-a5c9-9be973f341a4] Running
	I0930 11:57:43.466735   49883 system_pods.go:61] "kube-scheduler-test-preload-299114" [dc70061f-6a51-4333-bcb6-ec7ce3afabff] Running
	I0930 11:57:43.466739   49883 system_pods.go:61] "storage-provisioner" [6b91f25b-c36e-4f3b-8555-d91523973fd4] Running
	I0930 11:57:43.466747   49883 system_pods.go:74] duration metric: took 161.699655ms to wait for pod list to return data ...
	I0930 11:57:43.466755   49883 default_sa.go:34] waiting for default service account to be created ...
	I0930 11:57:43.663815   49883 default_sa.go:45] found service account: "default"
	I0930 11:57:43.663841   49883 default_sa.go:55] duration metric: took 197.078613ms for default service account to be created ...
	I0930 11:57:43.663851   49883 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 11:57:43.869154   49883 system_pods.go:86] 7 kube-system pods found
	I0930 11:57:43.869191   49883 system_pods.go:89] "coredns-6d4b75cb6d-77fdr" [63737ae3-7dfb-494d-b1f0-3514305f7d8d] Running
	I0930 11:57:43.869201   49883 system_pods.go:89] "etcd-test-preload-299114" [5ef6423d-d45a-4d1e-b19d-83e304c31a39] Running
	I0930 11:57:43.869207   49883 system_pods.go:89] "kube-apiserver-test-preload-299114" [c7f7ffdc-09a4-4b4c-8f94-bc2fcedd61df] Running
	I0930 11:57:43.869214   49883 system_pods.go:89] "kube-controller-manager-test-preload-299114" [f86c5029-7b60-4c53-becb-f226681d01a2] Running
	I0930 11:57:43.869220   49883 system_pods.go:89] "kube-proxy-2jtb6" [5912de81-1463-429e-a5c9-9be973f341a4] Running
	I0930 11:57:43.869227   49883 system_pods.go:89] "kube-scheduler-test-preload-299114" [dc70061f-6a51-4333-bcb6-ec7ce3afabff] Running
	I0930 11:57:43.869234   49883 system_pods.go:89] "storage-provisioner" [6b91f25b-c36e-4f3b-8555-d91523973fd4] Running
	I0930 11:57:43.869256   49883 system_pods.go:126] duration metric: took 205.385808ms to wait for k8s-apps to be running ...
	I0930 11:57:43.869267   49883 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 11:57:43.869320   49883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:57:43.884722   49883 system_svc.go:56] duration metric: took 15.448424ms WaitForService to wait for kubelet
	I0930 11:57:43.884751   49883 kubeadm.go:582] duration metric: took 13.347260289s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 11:57:43.884772   49883 node_conditions.go:102] verifying NodePressure condition ...
	I0930 11:57:44.066226   49883 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0930 11:57:44.066261   49883 node_conditions.go:123] node cpu capacity is 2
	I0930 11:57:44.066270   49883 node_conditions.go:105] duration metric: took 181.492805ms to run NodePressure ...
	I0930 11:57:44.066280   49883 start.go:241] waiting for startup goroutines ...
	I0930 11:57:44.066286   49883 start.go:246] waiting for cluster config update ...
	I0930 11:57:44.066296   49883 start.go:255] writing updated cluster config ...
	I0930 11:57:44.066551   49883 ssh_runner.go:195] Run: rm -f paused
	I0930 11:57:44.114526   49883 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0930 11:57:44.116697   49883 out.go:201] 
	W0930 11:57:44.118156   49883 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0930 11:57:44.119809   49883 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0930 11:57:44.121396   49883 out.go:177] * Done! kubectl is now configured to use "test-preload-299114" cluster and "default" namespace by default
	
	
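Note on the healthz probes earlier in this start log: the repeated 500 responses came from apiserver post-start hooks that had not yet completed after the control-plane restart ([-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes); once they finished, the same endpoint returned 200 and the start continued. To inspect that endpoint by hand against this profile, a minimal sketch (the context name matches the profile name, the certificate paths are the ones printed in the kapi.go client-config line above and will differ on another machine; /healthz?verbose and kubectl's --raw flag are standard Kubernetes):

	# via kubectl, using the context minikube configured for this profile
	kubectl --context test-preload-299114 get --raw='/healthz?verbose'

	# or directly with curl, using the client certificates minikube generated
	curl --cacert /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt \
	     --cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.crt \
	     --key /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/test-preload-299114/client.key \
	     'https://192.168.39.169:8443/healthz?verbose'
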
	==> CRI-O <==
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.022312893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697465022291244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17db135f-315e-4a04-9291-83b513ec62f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.022964400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=466c201b-1bb4-4dfa-b7c0-0e9463f5b944 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.023041051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=466c201b-1bb4-4dfa-b7c0-0e9463f5b944 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.023235086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44591cdd0af60ef73d9bdc40a4c442ac2aafd30defe8713517bf91bfd929a14b,PodSandboxId:942d5879f310c05beb0ad2a82d87e5750501b9b44f1512650a4fa0d25d5133af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727697455407699820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-77fdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63737ae3-7dfb-494d-b1f0-3514305f7d8d,},Annotations:map[string]string{io.kubernetes.container.hash: 10799ef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4532038eb636dcc8968bc826721c88e7c00d8f1bd1ac13dcd85ce3ef214809f,PodSandboxId:72769e850627132e0f16f4d89881d67c1aa03f6dfc8360ed94a86a6023ccfe54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727697448345338659,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2jtb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5912de81-1463-429e-a5c9-9be973f341a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ace8963,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e057593b71e60c9cd369ace7659fda6067dbbd31b53678cb67dd75eeac1da4,PodSandboxId:1e8c6dc0d66cf254dcc568572d80a0d937bf07b07b8889924d4f558731e60f28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727697448033688077,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
91f25b-c36e-4f3b-8555-d91523973fd4,},Annotations:map[string]string{io.kubernetes.container.hash: c43f027d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b18f8c5d1b850a63364f68d759fc1eba5b81e1ebce6d2746eb2b99f9ddb7e1d,PodSandboxId:2332f366cf7ef327bb3b158011f3637fbf7b8f24870f37acd2cce74ea7bda546,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727697443080697519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71358cf82c664c120bd174dc2bc734f2,},Anno
tations:map[string]string{io.kubernetes.container.hash: db77d4a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2876fb92c2f6e3e617fb70c783a00c8906cff5e43ebcff10b5258e9f0acd93f,PodSandboxId:a2d4e5479c792441dfe017cd6265c7485e44756498daa3ea293a6116aa62cd2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727697443057880370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02545a7adf8fb08eab9034753e1d9f6b,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97b82d3c7d455199a7410bcf4665b0fc0380adc496ace9649e6ce4936e46550,PodSandboxId:fa2a811959b5d1f2622d8e772778ff1520a20a7023138654bf1ac36aa13ea358,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727697443021299128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e234263772a6326ff9c1d778335d1e72,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa49dcbb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43557d06b6ea43ed03a39cdd73bd10b8a1a63a75990bcf4861b8d00139f07a58,PodSandboxId:d0a40b75b45241a36715e0ee8c7e08c029806a0b3f494ec38f0359f4ff35f835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727697443033533213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f6fae40103b1b2a651e8af5f212ae2,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=466c201b-1bb4-4dfa-b7c0-0e9463f5b944 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.061802998Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dad26ac3-95a8-4b55-acc8-f3daa32f9017 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.061896159Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dad26ac3-95a8-4b55-acc8-f3daa32f9017 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.063250559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e715b6da-9052-4444-8418-5d20d616df0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.063697407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697465063673989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e715b6da-9052-4444-8418-5d20d616df0c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.064328259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08e30506-f38e-428b-9bf7-d90508e9d37c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.064611777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08e30506-f38e-428b-9bf7-d90508e9d37c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.065383260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44591cdd0af60ef73d9bdc40a4c442ac2aafd30defe8713517bf91bfd929a14b,PodSandboxId:942d5879f310c05beb0ad2a82d87e5750501b9b44f1512650a4fa0d25d5133af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727697455407699820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-77fdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63737ae3-7dfb-494d-b1f0-3514305f7d8d,},Annotations:map[string]string{io.kubernetes.container.hash: 10799ef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4532038eb636dcc8968bc826721c88e7c00d8f1bd1ac13dcd85ce3ef214809f,PodSandboxId:72769e850627132e0f16f4d89881d67c1aa03f6dfc8360ed94a86a6023ccfe54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727697448345338659,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2jtb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5912de81-1463-429e-a5c9-9be973f341a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ace8963,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e057593b71e60c9cd369ace7659fda6067dbbd31b53678cb67dd75eeac1da4,PodSandboxId:1e8c6dc0d66cf254dcc568572d80a0d937bf07b07b8889924d4f558731e60f28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727697448033688077,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
91f25b-c36e-4f3b-8555-d91523973fd4,},Annotations:map[string]string{io.kubernetes.container.hash: c43f027d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b18f8c5d1b850a63364f68d759fc1eba5b81e1ebce6d2746eb2b99f9ddb7e1d,PodSandboxId:2332f366cf7ef327bb3b158011f3637fbf7b8f24870f37acd2cce74ea7bda546,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727697443080697519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71358cf82c664c120bd174dc2bc734f2,},Anno
tations:map[string]string{io.kubernetes.container.hash: db77d4a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2876fb92c2f6e3e617fb70c783a00c8906cff5e43ebcff10b5258e9f0acd93f,PodSandboxId:a2d4e5479c792441dfe017cd6265c7485e44756498daa3ea293a6116aa62cd2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727697443057880370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02545a7adf8fb08eab9034753e1d9f6b,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97b82d3c7d455199a7410bcf4665b0fc0380adc496ace9649e6ce4936e46550,PodSandboxId:fa2a811959b5d1f2622d8e772778ff1520a20a7023138654bf1ac36aa13ea358,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727697443021299128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e234263772a6326ff9c1d778335d1e72,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa49dcbb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43557d06b6ea43ed03a39cdd73bd10b8a1a63a75990bcf4861b8d00139f07a58,PodSandboxId:d0a40b75b45241a36715e0ee8c7e08c029806a0b3f494ec38f0359f4ff35f835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727697443033533213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f6fae40103b1b2a651e8af5f212ae2,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08e30506-f38e-428b-9bf7-d90508e9d37c name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.108857542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dbef6a6d-29ec-4e2f-b1b6-a2085e28ea87 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.108944963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dbef6a6d-29ec-4e2f-b1b6-a2085e28ea87 name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.110072241Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f55efbf6-55c5-4ce7-8087-eb9cfc4fdc50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.110616402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697465110591829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f55efbf6-55c5-4ce7-8087-eb9cfc4fdc50 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.111082864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f84313a7-b796-4628-a0b5-10405daac3ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.111176089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f84313a7-b796-4628-a0b5-10405daac3ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.111338236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44591cdd0af60ef73d9bdc40a4c442ac2aafd30defe8713517bf91bfd929a14b,PodSandboxId:942d5879f310c05beb0ad2a82d87e5750501b9b44f1512650a4fa0d25d5133af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727697455407699820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-77fdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63737ae3-7dfb-494d-b1f0-3514305f7d8d,},Annotations:map[string]string{io.kubernetes.container.hash: 10799ef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4532038eb636dcc8968bc826721c88e7c00d8f1bd1ac13dcd85ce3ef214809f,PodSandboxId:72769e850627132e0f16f4d89881d67c1aa03f6dfc8360ed94a86a6023ccfe54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727697448345338659,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2jtb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5912de81-1463-429e-a5c9-9be973f341a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ace8963,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e057593b71e60c9cd369ace7659fda6067dbbd31b53678cb67dd75eeac1da4,PodSandboxId:1e8c6dc0d66cf254dcc568572d80a0d937bf07b07b8889924d4f558731e60f28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727697448033688077,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
91f25b-c36e-4f3b-8555-d91523973fd4,},Annotations:map[string]string{io.kubernetes.container.hash: c43f027d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b18f8c5d1b850a63364f68d759fc1eba5b81e1ebce6d2746eb2b99f9ddb7e1d,PodSandboxId:2332f366cf7ef327bb3b158011f3637fbf7b8f24870f37acd2cce74ea7bda546,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727697443080697519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71358cf82c664c120bd174dc2bc734f2,},Anno
tations:map[string]string{io.kubernetes.container.hash: db77d4a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2876fb92c2f6e3e617fb70c783a00c8906cff5e43ebcff10b5258e9f0acd93f,PodSandboxId:a2d4e5479c792441dfe017cd6265c7485e44756498daa3ea293a6116aa62cd2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727697443057880370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02545a7adf8fb08eab9034753e1d9f6b,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97b82d3c7d455199a7410bcf4665b0fc0380adc496ace9649e6ce4936e46550,PodSandboxId:fa2a811959b5d1f2622d8e772778ff1520a20a7023138654bf1ac36aa13ea358,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727697443021299128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e234263772a6326ff9c1d778335d1e72,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa49dcbb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43557d06b6ea43ed03a39cdd73bd10b8a1a63a75990bcf4861b8d00139f07a58,PodSandboxId:d0a40b75b45241a36715e0ee8c7e08c029806a0b3f494ec38f0359f4ff35f835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727697443033533213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f6fae40103b1b2a651e8af5f212ae2,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f84313a7-b796-4628-a0b5-10405daac3ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.146597520Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7977f97-75d3-4e90-a099-de5e6404261a name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.146688215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7977f97-75d3-4e90-a099-de5e6404261a name=/runtime.v1.RuntimeService/Version
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.148155671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24b0fc3b-93c9-4d9f-a878-5e49eec057fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.148587413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727697465148567472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24b0fc3b-93c9-4d9f-a878-5e49eec057fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.149212321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e8f7c31-52fa-40c3-8c01-3db4dead7091 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.149268398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e8f7c31-52fa-40c3-8c01-3db4dead7091 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 11:57:45 test-preload-299114 crio[683]: time="2024-09-30 11:57:45.149421150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:44591cdd0af60ef73d9bdc40a4c442ac2aafd30defe8713517bf91bfd929a14b,PodSandboxId:942d5879f310c05beb0ad2a82d87e5750501b9b44f1512650a4fa0d25d5133af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727697455407699820,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-77fdr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63737ae3-7dfb-494d-b1f0-3514305f7d8d,},Annotations:map[string]string{io.kubernetes.container.hash: 10799ef0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4532038eb636dcc8968bc826721c88e7c00d8f1bd1ac13dcd85ce3ef214809f,PodSandboxId:72769e850627132e0f16f4d89881d67c1aa03f6dfc8360ed94a86a6023ccfe54,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727697448345338659,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2jtb6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5912de81-1463-429e-a5c9-9be973f341a4,},Annotations:map[string]string{io.kubernetes.container.hash: 1ace8963,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e057593b71e60c9cd369ace7659fda6067dbbd31b53678cb67dd75eeac1da4,PodSandboxId:1e8c6dc0d66cf254dcc568572d80a0d937bf07b07b8889924d4f558731e60f28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727697448033688077,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
91f25b-c36e-4f3b-8555-d91523973fd4,},Annotations:map[string]string{io.kubernetes.container.hash: c43f027d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b18f8c5d1b850a63364f68d759fc1eba5b81e1ebce6d2746eb2b99f9ddb7e1d,PodSandboxId:2332f366cf7ef327bb3b158011f3637fbf7b8f24870f37acd2cce74ea7bda546,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727697443080697519,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71358cf82c664c120bd174dc2bc734f2,},Anno
tations:map[string]string{io.kubernetes.container.hash: db77d4a8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2876fb92c2f6e3e617fb70c783a00c8906cff5e43ebcff10b5258e9f0acd93f,PodSandboxId:a2d4e5479c792441dfe017cd6265c7485e44756498daa3ea293a6116aa62cd2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727697443057880370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02545a7adf8fb08eab9034753e1d9f6b,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97b82d3c7d455199a7410bcf4665b0fc0380adc496ace9649e6ce4936e46550,PodSandboxId:fa2a811959b5d1f2622d8e772778ff1520a20a7023138654bf1ac36aa13ea358,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727697443021299128,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e234263772a6326ff9c1d778335d1e72,},Annotations:map[string]str
ing{io.kubernetes.container.hash: aa49dcbb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43557d06b6ea43ed03a39cdd73bd10b8a1a63a75990bcf4861b8d00139f07a58,PodSandboxId:d0a40b75b45241a36715e0ee8c7e08c029806a0b3f494ec38f0359f4ff35f835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727697443033533213,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-299114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39f6fae40103b1b2a651e8af5f212ae2,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e8f7c31-52fa-40c3-8c01-3db4dead7091 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	44591cdd0af60       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   942d5879f310c       coredns-6d4b75cb6d-77fdr
	c4532038eb636       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   72769e8506271       kube-proxy-2jtb6
	79e057593b71e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 seconds ago      Running             storage-provisioner       1                   1e8c6dc0d66cf       storage-provisioner
	9b18f8c5d1b85       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   22 seconds ago      Running             etcd                      1                   2332f366cf7ef       etcd-test-preload-299114
	a2876fb92c2f6       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   22 seconds ago      Running             kube-scheduler            1                   a2d4e5479c792       kube-scheduler-test-preload-299114
	43557d06b6ea4       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   22 seconds ago      Running             kube-controller-manager   1                   d0a40b75b4524       kube-controller-manager-test-preload-299114
	c97b82d3c7d45       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   fa2a811959b5d       kube-apiserver-test-preload-299114
	
	
	==> coredns [44591cdd0af60ef73d9bdc40a4c442ac2aafd30defe8713517bf91bfd929a14b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46291 - 52278 "HINFO IN 2934726662939897695.8220370806795330545. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029959322s
	
	
	==> describe nodes <==
	Name:               test-preload-299114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-299114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=test-preload-299114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T11_56_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 11:56:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-299114
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 11:57:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 11:57:37 +0000   Mon, 30 Sep 2024 11:56:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 11:57:37 +0000   Mon, 30 Sep 2024 11:56:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 11:57:37 +0000   Mon, 30 Sep 2024 11:56:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 11:57:37 +0000   Mon, 30 Sep 2024 11:57:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    test-preload-299114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58f6b874f5964b71a94c691ef2d2d42d
	  System UUID:                58f6b874-f596-4b71-a94c-691ef2d2d42d
	  Boot ID:                    2a2a12fe-391f-471a-94d1-ddea67b95db6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-77fdr                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-299114                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kube-apiserver-test-preload-299114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-test-preload-299114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-2jtb6                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-299114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 16s                  kube-proxy       
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s (x3 over 100s)  kubelet          Node test-preload-299114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     100s (x3 over 100s)  kubelet          Node test-preload-299114 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    100s (x3 over 100s)  kubelet          Node test-preload-299114 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  92s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 92s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  92s                  kubelet          Node test-preload-299114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                  kubelet          Node test-preload-299114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                  kubelet          Node test-preload-299114 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s                  kubelet          Node test-preload-299114 status is now: NodeReady
	  Normal  RegisteredNode           81s                  node-controller  Node test-preload-299114 event: Registered Node test-preload-299114 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-299114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-299114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-299114 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node test-preload-299114 event: Registered Node test-preload-299114 in Controller
	
	
	==> dmesg <==
	[Sep30 11:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050531] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039801] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.832319] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.664029] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.627651] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep30 11:57] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.059365] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067460] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.170854] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.144501] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.288981] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[ +13.943382] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.060228] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.032889] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +3.711778] kauditd_printk_skb: 105 callbacks suppressed
	[  +4.811875] systemd-fstab-generator[1771]: Ignoring "noauto" option for root device
	[  +4.614395] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.520717] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [9b18f8c5d1b850a63364f68d759fc1eba5b81e1ebce6d2746eb2b99f9ddb7e1d] <==
	{"level":"info","ts":"2024-09-30T11:57:23.479Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"88d17c48ad0ae483","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-30T11:57:23.484Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-30T11:57:23.508Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T11:57:23.508Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"88d17c48ad0ae483","initial-advertise-peer-urls":["https://192.168.39.169:2380"],"listen-peer-urls":["https://192.168.39.169:2380"],"advertise-client-urls":["https://192.168.39.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T11:57:23.508Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T11:57:23.509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 switched to configuration voters=(9858797710873388163)"}
	{"level":"info","ts":"2024-09-30T11:57:23.510Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","added-peer-id":"88d17c48ad0ae483","added-peer-peer-urls":["https://192.168.39.169:2380"]}
	{"level":"info","ts":"2024-09-30T11:57:23.510Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dd1030519101f266","local-member-id":"88d17c48ad0ae483","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:57:23.510Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T11:57:23.519Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2024-09-30T11:57:23.519Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.169:2380"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgPreVoteResp from 88d17c48ad0ae483 at term 2"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 received MsgVoteResp from 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"88d17c48ad0ae483 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T11:57:24.755Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 88d17c48ad0ae483 elected leader 88d17c48ad0ae483 at term 3"}
	{"level":"info","ts":"2024-09-30T11:57:24.759Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"88d17c48ad0ae483","local-member-attributes":"{Name:test-preload-299114 ClientURLs:[https://192.168.39.169:2379]}","request-path":"/0/members/88d17c48ad0ae483/attributes","cluster-id":"dd1030519101f266","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T11:57:24.759Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:57:24.760Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T11:57:24.761Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.169:2379"}
	{"level":"info","ts":"2024-09-30T11:57:24.761Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T11:57:24.761Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T11:57:24.762Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:57:45 up 0 min,  0 users,  load average: 0.81, 0.23, 0.08
	Linux test-preload-299114 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c97b82d3c7d455199a7410bcf4665b0fc0380adc496ace9649e6ce4936e46550] <==
	I0930 11:57:27.238208       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0930 11:57:27.253913       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0930 11:57:27.253947       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0930 11:57:27.254841       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0930 11:57:27.204867       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0930 11:57:27.274762       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0930 11:57:27.347521       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0930 11:57:27.351780       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0930 11:57:27.355546       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0930 11:57:27.357631       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0930 11:57:27.364910       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0930 11:57:27.406578       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0930 11:57:27.420843       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 11:57:27.433805       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 11:57:27.438055       1 cache.go:39] Caches are synced for autoregister controller
	I0930 11:57:27.892275       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0930 11:57:28.209056       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 11:57:28.745332       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0930 11:57:29.046558       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0930 11:57:29.058625       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0930 11:57:29.102312       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0930 11:57:29.127502       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 11:57:29.137700       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 11:57:40.582093       1 controller.go:611] quota admission added evaluator for: endpoints
	I0930 11:57:40.683439       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [43557d06b6ea43ed03a39cdd73bd10b8a1a63a75990bcf4861b8d00139f07a58] <==
	I0930 11:57:40.476340       1 shared_informer.go:262] Caches are synced for PVC protection
	I0930 11:57:40.477506       1 shared_informer.go:262] Caches are synced for taint
	I0930 11:57:40.477705       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0930 11:57:40.478017       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0930 11:57:40.478154       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-299114. Assuming now as a timestamp.
	I0930 11:57:40.478215       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0930 11:57:40.478285       1 event.go:294] "Event occurred" object="test-preload-299114" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-299114 event: Registered Node test-preload-299114 in Controller"
	I0930 11:57:40.486618       1 shared_informer.go:262] Caches are synced for daemon sets
	I0930 11:57:40.489649       1 shared_informer.go:262] Caches are synced for crt configmap
	I0930 11:57:40.497540       1 shared_informer.go:262] Caches are synced for ephemeral
	I0930 11:57:40.500936       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0930 11:57:40.501165       1 shared_informer.go:262] Caches are synced for endpoint
	I0930 11:57:40.509308       1 shared_informer.go:262] Caches are synced for expand
	I0930 11:57:40.526104       1 shared_informer.go:262] Caches are synced for attach detach
	I0930 11:57:40.541563       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0930 11:57:40.574576       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0930 11:57:40.587694       1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0930 11:57:40.676829       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0930 11:57:40.677867       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 11:57:40.679207       1 shared_informer.go:262] Caches are synced for resource quota
	I0930 11:57:40.704081       1 shared_informer.go:262] Caches are synced for job
	I0930 11:57:40.723853       1 shared_informer.go:262] Caches are synced for cronjob
	I0930 11:57:41.126793       1 shared_informer.go:262] Caches are synced for garbage collector
	I0930 11:57:41.126834       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0930 11:57:41.130622       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [c4532038eb636dcc8968bc826721c88e7c00d8f1bd1ac13dcd85ce3ef214809f] <==
	I0930 11:57:28.692101       1 node.go:163] Successfully retrieved node IP: 192.168.39.169
	I0930 11:57:28.692278       1 server_others.go:138] "Detected node IP" address="192.168.39.169"
	I0930 11:57:28.692367       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0930 11:57:28.729027       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0930 11:57:28.729110       1 server_others.go:206] "Using iptables Proxier"
	I0930 11:57:28.730037       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0930 11:57:28.730839       1 server.go:661] "Version info" version="v1.24.4"
	I0930 11:57:28.731355       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:57:28.734589       1 config.go:444] "Starting node config controller"
	I0930 11:57:28.734787       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0930 11:57:28.735218       1 config.go:317] "Starting service config controller"
	I0930 11:57:28.740573       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0930 11:57:28.740612       1 shared_informer.go:262] Caches are synced for service config
	I0930 11:57:28.735229       1 config.go:226] "Starting endpoint slice config controller"
	I0930 11:57:28.744076       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0930 11:57:28.744150       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0930 11:57:28.843209       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [a2876fb92c2f6e3e617fb70c783a00c8906cff5e43ebcff10b5258e9f0acd93f] <==
	I0930 11:57:24.123530       1 serving.go:348] Generated self-signed cert in-memory
	W0930 11:57:27.279806       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0930 11:57:27.280651       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0930 11:57:27.280759       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0930 11:57:27.280793       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0930 11:57:27.312871       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0930 11:57:27.312956       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 11:57:27.327997       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0930 11:57:27.330002       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 11:57:27.332410       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 11:57:27.334014       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0930 11:57:27.433253       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.444518    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5912de81-1463-429e-a5c9-9be973f341a4-lib-modules\") pod \"kube-proxy-2jtb6\" (UID: \"5912de81-1463-429e-a5c9-9be973f341a4\") " pod="kube-system/kube-proxy-2jtb6"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.444809    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n5lh\" (UniqueName: \"kubernetes.io/projected/63737ae3-7dfb-494d-b1f0-3514305f7d8d-kube-api-access-2n5lh\") pod \"coredns-6d4b75cb6d-77fdr\" (UID: \"63737ae3-7dfb-494d-b1f0-3514305f7d8d\") " pod="kube-system/coredns-6d4b75cb6d-77fdr"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.445085    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5912de81-1463-429e-a5c9-9be973f341a4-xtables-lock\") pod \"kube-proxy-2jtb6\" (UID: \"5912de81-1463-429e-a5c9-9be973f341a4\") " pod="kube-system/kube-proxy-2jtb6"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.445383    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume\") pod \"coredns-6d4b75cb6d-77fdr\" (UID: \"63737ae3-7dfb-494d-b1f0-3514305f7d8d\") " pod="kube-system/coredns-6d4b75cb6d-77fdr"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.445535    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5912de81-1463-429e-a5c9-9be973f341a4-kube-proxy\") pod \"kube-proxy-2jtb6\" (UID: \"5912de81-1463-429e-a5c9-9be973f341a4\") " pod="kube-system/kube-proxy-2jtb6"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.445643    1138 reconciler.go:159] "Reconciler: start to sync state"
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.561192    1138 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4m5l\" (UniqueName: \"kubernetes.io/projected/d3ca5452-59e4-4b4a-b628-92952c07c82f-kube-api-access-v4m5l\") pod \"d3ca5452-59e4-4b4a-b628-92952c07c82f\" (UID: \"d3ca5452-59e4-4b4a-b628-92952c07c82f\") "
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.561333    1138 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3ca5452-59e4-4b4a-b628-92952c07c82f-config-volume\") pod \"d3ca5452-59e4-4b4a-b628-92952c07c82f\" (UID: \"d3ca5452-59e4-4b4a-b628-92952c07c82f\") "
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: E0930 11:57:27.563089    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: W0930 11:57:27.563218    1138 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d3ca5452-59e4-4b4a-b628-92952c07c82f/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: E0930 11:57:27.563305    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume podName:63737ae3-7dfb-494d-b1f0-3514305f7d8d nodeName:}" failed. No retries permitted until 2024-09-30 11:57:28.063197196 +0000 UTC m=+5.917420786 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume") pod "coredns-6d4b75cb6d-77fdr" (UID: "63737ae3-7dfb-494d-b1f0-3514305f7d8d") : object "kube-system"/"coredns" not registered
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.564026    1138 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ca5452-59e4-4b4a-b628-92952c07c82f-config-volume" (OuterVolumeSpecName: "config-volume") pod "d3ca5452-59e4-4b4a-b628-92952c07c82f" (UID: "d3ca5452-59e4-4b4a-b628-92952c07c82f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: W0930 11:57:27.564169    1138 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d3ca5452-59e4-4b4a-b628-92952c07c82f/volumes/kubernetes.io~projected/kube-api-access-v4m5l: clearQuota called, but quotas disabled
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.564794    1138 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ca5452-59e4-4b4a-b628-92952c07c82f-kube-api-access-v4m5l" (OuterVolumeSpecName: "kube-api-access-v4m5l") pod "d3ca5452-59e4-4b4a-b628-92952c07c82f" (UID: "d3ca5452-59e4-4b4a-b628-92952c07c82f"). InnerVolumeSpecName "kube-api-access-v4m5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.662203    1138 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3ca5452-59e4-4b4a-b628-92952c07c82f-config-volume\") on node \"test-preload-299114\" DevicePath \"\""
	Sep 30 11:57:27 test-preload-299114 kubelet[1138]: I0930 11:57:27.662359    1138 reconciler.go:384] "Volume detached for volume \"kube-api-access-v4m5l\" (UniqueName: \"kubernetes.io/projected/d3ca5452-59e4-4b4a-b628-92952c07c82f-kube-api-access-v4m5l\") on node \"test-preload-299114\" DevicePath \"\""
	Sep 30 11:57:28 test-preload-299114 kubelet[1138]: E0930 11:57:28.070773    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 11:57:28 test-preload-299114 kubelet[1138]: E0930 11:57:28.070856    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume podName:63737ae3-7dfb-494d-b1f0-3514305f7d8d nodeName:}" failed. No retries permitted until 2024-09-30 11:57:29.070840222 +0000 UTC m=+6.925063813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume") pod "coredns-6d4b75cb6d-77fdr" (UID: "63737ae3-7dfb-494d-b1f0-3514305f7d8d") : object "kube-system"/"coredns" not registered
	Sep 30 11:57:29 test-preload-299114 kubelet[1138]: E0930 11:57:29.077094    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 11:57:29 test-preload-299114 kubelet[1138]: E0930 11:57:29.077187    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume podName:63737ae3-7dfb-494d-b1f0-3514305f7d8d nodeName:}" failed. No retries permitted until 2024-09-30 11:57:31.077171927 +0000 UTC m=+8.931395511 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume") pod "coredns-6d4b75cb6d-77fdr" (UID: "63737ae3-7dfb-494d-b1f0-3514305f7d8d") : object "kube-system"/"coredns" not registered
	Sep 30 11:57:29 test-preload-299114 kubelet[1138]: E0930 11:57:29.385791    1138 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-77fdr" podUID=63737ae3-7dfb-494d-b1f0-3514305f7d8d
	Sep 30 11:57:30 test-preload-299114 kubelet[1138]: I0930 11:57:30.390362    1138 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d3ca5452-59e4-4b4a-b628-92952c07c82f path="/var/lib/kubelet/pods/d3ca5452-59e4-4b4a-b628-92952c07c82f/volumes"
	Sep 30 11:57:31 test-preload-299114 kubelet[1138]: E0930 11:57:31.091906    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 30 11:57:31 test-preload-299114 kubelet[1138]: E0930 11:57:31.092019    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume podName:63737ae3-7dfb-494d-b1f0-3514305f7d8d nodeName:}" failed. No retries permitted until 2024-09-30 11:57:35.092003164 +0000 UTC m=+12.946226749 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/63737ae3-7dfb-494d-b1f0-3514305f7d8d-config-volume") pod "coredns-6d4b75cb6d-77fdr" (UID: "63737ae3-7dfb-494d-b1f0-3514305f7d8d") : object "kube-system"/"coredns" not registered
	Sep 30 11:57:31 test-preload-299114 kubelet[1138]: E0930 11:57:31.385132    1138 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-77fdr" podUID=63737ae3-7dfb-494d-b1f0-3514305f7d8d
	
	
	==> storage-provisioner [79e057593b71e60c9cd369ace7659fda6067dbbd31b53678cb67dd75eeac1da4] <==
	I0930 11:57:28.122062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-299114 -n test-preload-299114
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-299114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-299114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-299114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-299114: (1.11911525s)
--- FAIL: TestPreload (166.27s)
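The post-mortem steps recorded above can be replayed by hand. A minimal shell sketch follows, assuming the out/minikube-linux-amd64 binary is built locally and a profile named test-preload-299114 still exists (the harness deletes it at the end, so the name here is only illustrative):

	# Re-issue the same post-mortem queries shown at helpers_test.go:254 and :261.
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p test-preload-299114
	# List pods in any namespace that are not in the Running phase.
	kubectl --context test-preload-299114 get po -A \
	  -o=jsonpath='{.items[*].metadata.name}' \
	  --field-selector=status.phase!=Running
	# Remove the profile afterwards, mirroring the cleanup step above.
	out/minikube-linux-amd64 delete -p test-preload-299114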

                                                
                                    
x
+
TestKubernetesUpgrade (412.15s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.054083643s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-001996] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-001996" primary control-plane node in "kubernetes-upgrade-001996" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 12:07:29.133268   59516 out.go:345] Setting OutFile to fd 1 ...
	I0930 12:07:29.133403   59516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 12:07:29.133412   59516 out.go:358] Setting ErrFile to fd 2...
	I0930 12:07:29.133417   59516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 12:07:29.133603   59516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 12:07:29.134239   59516 out.go:352] Setting JSON to false
	I0930 12:07:29.135190   59516 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6596,"bootTime":1727691453,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 12:07:29.135301   59516 start.go:139] virtualization: kvm guest
	I0930 12:07:29.137514   59516 out.go:177] * [kubernetes-upgrade-001996] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 12:07:29.138894   59516 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 12:07:29.138912   59516 notify.go:220] Checking for updates...
	I0930 12:07:29.141940   59516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 12:07:29.143327   59516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 12:07:29.144850   59516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 12:07:29.146188   59516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 12:07:29.147521   59516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 12:07:29.149261   59516 config.go:182] Loaded profile config "embed-certs-499540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 12:07:29.149372   59516 config.go:182] Loaded profile config "no-preload-575582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 12:07:29.149450   59516 config.go:182] Loaded profile config "old-k8s-version-121479": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 12:07:29.149521   59516 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 12:07:29.187005   59516 out.go:177] * Using the kvm2 driver based on user configuration
	I0930 12:07:29.188524   59516 start.go:297] selected driver: kvm2
	I0930 12:07:29.188544   59516 start.go:901] validating driver "kvm2" against <nil>
	I0930 12:07:29.188558   59516 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 12:07:29.189374   59516 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 12:07:29.189462   59516 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 12:07:29.205792   59516 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 12:07:29.205855   59516 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 12:07:29.206085   59516 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 12:07:29.206109   59516 cni.go:84] Creating CNI manager for ""
	I0930 12:07:29.206153   59516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 12:07:29.206163   59516 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 12:07:29.206209   59516 start.go:340] cluster config:
	{Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:07:29.206301   59516 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 12:07:29.208506   59516 out.go:177] * Starting "kubernetes-upgrade-001996" primary control-plane node in "kubernetes-upgrade-001996" cluster
	I0930 12:07:29.209874   59516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 12:07:29.209933   59516 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 12:07:29.209962   59516 cache.go:56] Caching tarball of preloaded images
	I0930 12:07:29.210053   59516 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 12:07:29.210068   59516 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 12:07:29.210166   59516 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/config.json ...
	I0930 12:07:29.210185   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/config.json: {Name:mk1c0cf0edd254faefc3e87f42a9a2e85afde235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:07:29.210378   59516 start.go:360] acquireMachinesLock for kubernetes-upgrade-001996: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 12:07:29.210414   59516 start.go:364] duration metric: took 18.323µs to acquireMachinesLock for "kubernetes-upgrade-001996"
	I0930 12:07:29.210437   59516 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0930 12:07:29.210509   59516 start.go:125] createHost starting for "" (driver="kvm2")
	I0930 12:07:29.213048   59516 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0930 12:07:29.213209   59516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 12:07:29.213252   59516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 12:07:29.228839   59516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36669
	I0930 12:07:29.229313   59516 main.go:141] libmachine: () Calling .GetVersion
	I0930 12:07:29.229925   59516 main.go:141] libmachine: Using API Version  1
	I0930 12:07:29.229949   59516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 12:07:29.230272   59516 main.go:141] libmachine: () Calling .GetMachineName
	I0930 12:07:29.230443   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:07:29.230577   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:29.230733   59516 start.go:159] libmachine.API.Create for "kubernetes-upgrade-001996" (driver="kvm2")
	I0930 12:07:29.230767   59516 client.go:168] LocalClient.Create starting
	I0930 12:07:29.230806   59516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem
	I0930 12:07:29.230848   59516 main.go:141] libmachine: Decoding PEM data...
	I0930 12:07:29.230879   59516 main.go:141] libmachine: Parsing certificate...
	I0930 12:07:29.230971   59516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem
	I0930 12:07:29.231005   59516 main.go:141] libmachine: Decoding PEM data...
	I0930 12:07:29.231021   59516 main.go:141] libmachine: Parsing certificate...
	I0930 12:07:29.231047   59516 main.go:141] libmachine: Running pre-create checks...
	I0930 12:07:29.231057   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .PreCreateCheck
	I0930 12:07:29.231394   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetConfigRaw
	I0930 12:07:29.231800   59516 main.go:141] libmachine: Creating machine...
	I0930 12:07:29.231813   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .Create
	I0930 12:07:29.231963   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Creating KVM machine...
	I0930 12:07:29.233245   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found existing default KVM network
	I0930 12:07:29.234426   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.234282   59555 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7d:62:8d} reservation:<nil>}
	I0930 12:07:29.235556   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.235475   59555 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123f90}
	I0930 12:07:29.235616   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | created network xml: 
	I0930 12:07:29.235637   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | <network>
	I0930 12:07:29.235647   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   <name>mk-kubernetes-upgrade-001996</name>
	I0930 12:07:29.235660   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   <dns enable='no'/>
	I0930 12:07:29.235671   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   
	I0930 12:07:29.235684   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0930 12:07:29.235695   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |     <dhcp>
	I0930 12:07:29.235709   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0930 12:07:29.235731   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |     </dhcp>
	I0930 12:07:29.235771   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   </ip>
	I0930 12:07:29.235789   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG |   
	I0930 12:07:29.235796   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | </network>
	I0930 12:07:29.235808   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | 
	I0930 12:07:29.241480   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | trying to create private KVM network mk-kubernetes-upgrade-001996 192.168.50.0/24...
	I0930 12:07:29.313437   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | private KVM network mk-kubernetes-upgrade-001996 192.168.50.0/24 created
	I0930 12:07:29.313464   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting up store path in /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996 ...
	I0930 12:07:29.313474   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.313418   59555 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 12:07:29.313487   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Building disk image from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 12:07:29.313562   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Downloading /home/jenkins/minikube-integration/19734-3842/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0930 12:07:29.553441   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.553338   59555 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa...
	I0930 12:07:29.700730   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.700587   59555 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/kubernetes-upgrade-001996.rawdisk...
	I0930 12:07:29.700805   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Writing magic tar header
	I0930 12:07:29.700825   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Writing SSH key tar header
	I0930 12:07:29.700840   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:29.700747   59555 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996 ...
	I0930 12:07:29.700936   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996
	I0930 12:07:29.700975   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube/machines
	I0930 12:07:29.700990   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996 (perms=drwx------)
	I0930 12:07:29.701014   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube/machines (perms=drwxr-xr-x)
	I0930 12:07:29.701028   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 12:07:29.701046   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19734-3842
	I0930 12:07:29.701066   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0930 12:07:29.701079   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842/.minikube (perms=drwxr-xr-x)
	I0930 12:07:29.701096   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins/minikube-integration/19734-3842 (perms=drwxrwxr-x)
	I0930 12:07:29.701109   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0930 12:07:29.701119   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home/jenkins
	I0930 12:07:29.701130   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Checking permissions on dir: /home
	I0930 12:07:29.701140   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0930 12:07:29.701154   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Creating domain...
	I0930 12:07:29.701165   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Skipping /home - not owner
	I0930 12:07:29.702314   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) define libvirt domain using xml: 
	I0930 12:07:29.702339   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) <domain type='kvm'>
	I0930 12:07:29.702351   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <name>kubernetes-upgrade-001996</name>
	I0930 12:07:29.702359   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <memory unit='MiB'>2200</memory>
	I0930 12:07:29.702389   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <vcpu>2</vcpu>
	I0930 12:07:29.702401   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <features>
	I0930 12:07:29.702409   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <acpi/>
	I0930 12:07:29.702415   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <apic/>
	I0930 12:07:29.702452   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <pae/>
	I0930 12:07:29.702477   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     
	I0930 12:07:29.702493   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   </features>
	I0930 12:07:29.702507   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <cpu mode='host-passthrough'>
	I0930 12:07:29.702517   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   
	I0930 12:07:29.702524   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   </cpu>
	I0930 12:07:29.702539   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <os>
	I0930 12:07:29.702549   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <type>hvm</type>
	I0930 12:07:29.702557   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <boot dev='cdrom'/>
	I0930 12:07:29.702566   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <boot dev='hd'/>
	I0930 12:07:29.702575   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <bootmenu enable='no'/>
	I0930 12:07:29.702589   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   </os>
	I0930 12:07:29.702600   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   <devices>
	I0930 12:07:29.702612   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <disk type='file' device='cdrom'>
	I0930 12:07:29.702632   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/boot2docker.iso'/>
	I0930 12:07:29.702644   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <target dev='hdc' bus='scsi'/>
	I0930 12:07:29.702655   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <readonly/>
	I0930 12:07:29.702668   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </disk>
	I0930 12:07:29.702683   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <disk type='file' device='disk'>
	I0930 12:07:29.702695   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0930 12:07:29.702713   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <source file='/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/kubernetes-upgrade-001996.rawdisk'/>
	I0930 12:07:29.702723   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <target dev='hda' bus='virtio'/>
	I0930 12:07:29.702733   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </disk>
	I0930 12:07:29.702747   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <interface type='network'>
	I0930 12:07:29.702760   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <source network='mk-kubernetes-upgrade-001996'/>
	I0930 12:07:29.702770   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <model type='virtio'/>
	I0930 12:07:29.702780   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </interface>
	I0930 12:07:29.702791   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <interface type='network'>
	I0930 12:07:29.702800   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <source network='default'/>
	I0930 12:07:29.702810   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <model type='virtio'/>
	I0930 12:07:29.702830   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </interface>
	I0930 12:07:29.702848   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <serial type='pty'>
	I0930 12:07:29.702859   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <target port='0'/>
	I0930 12:07:29.702874   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </serial>
	I0930 12:07:29.702887   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <console type='pty'>
	I0930 12:07:29.702898   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <target type='serial' port='0'/>
	I0930 12:07:29.702906   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </console>
	I0930 12:07:29.702917   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     <rng model='virtio'>
	I0930 12:07:29.702927   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)       <backend model='random'>/dev/random</backend>
	I0930 12:07:29.702935   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     </rng>
	I0930 12:07:29.702941   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     
	I0930 12:07:29.702954   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)     
	I0930 12:07:29.702976   59516 main.go:141] libmachine: (kubernetes-upgrade-001996)   </devices>
	I0930 12:07:29.702986   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) </domain>
	I0930 12:07:29.702996   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) 
	I0930 12:07:29.707242   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:a6:23:83 in network default
	I0930 12:07:29.707791   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Ensuring networks are active...
	I0930 12:07:29.707807   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:29.708586   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Ensuring network default is active
	I0930 12:07:29.708836   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Ensuring network mk-kubernetes-upgrade-001996 is active
	I0930 12:07:29.709300   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Getting domain xml...
	I0930 12:07:29.710006   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Creating domain...
	I0930 12:07:30.927751   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Waiting to get IP...
	I0930 12:07:30.928596   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:30.928951   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:30.928971   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:30.928928   59555 retry.go:31] will retry after 278.009495ms: waiting for machine to come up
	I0930 12:07:31.208647   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.209209   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.209232   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:31.209167   59555 retry.go:31] will retry after 327.356321ms: waiting for machine to come up
	I0930 12:07:31.537641   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.538257   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.538280   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:31.538223   59555 retry.go:31] will retry after 381.717271ms: waiting for machine to come up
	I0930 12:07:31.921965   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.922543   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:31.922571   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:31.922485   59555 retry.go:31] will retry after 519.100378ms: waiting for machine to come up
	I0930 12:07:32.442900   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:32.443400   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:32.443426   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:32.443350   59555 retry.go:31] will retry after 513.819746ms: waiting for machine to come up
	I0930 12:07:32.959049   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:32.959662   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:32.959697   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:32.959578   59555 retry.go:31] will retry after 681.831496ms: waiting for machine to come up
	I0930 12:07:33.643580   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:33.644124   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:33.644154   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:33.644066   59555 retry.go:31] will retry after 956.147243ms: waiting for machine to come up
	I0930 12:07:34.601731   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:34.602268   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:34.602298   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:34.602215   59555 retry.go:31] will retry after 1.158123867s: waiting for machine to come up
	I0930 12:07:35.762035   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:35.762480   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:35.762506   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:35.762432   59555 retry.go:31] will retry after 1.619479791s: waiting for machine to come up
	I0930 12:07:37.384380   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:37.384863   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:37.384892   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:37.384831   59555 retry.go:31] will retry after 1.864178239s: waiting for machine to come up
	I0930 12:07:39.250509   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:39.250987   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:39.251021   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:39.250930   59555 retry.go:31] will retry after 2.124099918s: waiting for machine to come up
	I0930 12:07:41.378318   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:41.378842   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:41.378871   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:41.378795   59555 retry.go:31] will retry after 2.441308996s: waiting for machine to come up
	I0930 12:07:43.821495   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:43.822171   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:43.822195   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:43.822119   59555 retry.go:31] will retry after 3.501810503s: waiting for machine to come up
	I0930 12:07:47.326596   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:47.327002   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find current IP address of domain kubernetes-upgrade-001996 in network mk-kubernetes-upgrade-001996
	I0930 12:07:47.327054   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | I0930 12:07:47.326950   59555 retry.go:31] will retry after 4.395698469s: waiting for machine to come up
	I0930 12:07:51.723897   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.724340   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has current primary IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.724359   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Found IP for machine: 192.168.50.128
	I0930 12:07:51.724373   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Reserving static IP address...
	I0930 12:07:51.724652   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-001996", mac: "52:54:00:68:28:6c", ip: "192.168.50.128"} in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.798758   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Getting to WaitForSSH function...
	I0930 12:07:51.798793   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Reserved static IP address: 192.168.50.128
	I0930 12:07:51.798807   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Waiting for SSH to be available...
	I0930 12:07:51.801255   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.801662   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:51.801701   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.801886   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Using SSH client type: external
	I0930 12:07:51.801913   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa (-rw-------)
	I0930 12:07:51.801952   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 12:07:51.801971   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | About to run SSH command:
	I0930 12:07:51.801986   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | exit 0
	I0930 12:07:51.925844   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | SSH cmd err, output: <nil>: 
	I0930 12:07:51.926175   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) KVM machine creation complete!
	I0930 12:07:51.926441   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetConfigRaw
	I0930 12:07:51.926976   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:51.927186   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:51.927358   59516 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0930 12:07:51.927375   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetState
	I0930 12:07:51.928717   59516 main.go:141] libmachine: Detecting operating system of created instance...
	I0930 12:07:51.928729   59516 main.go:141] libmachine: Waiting for SSH to be available...
	I0930 12:07:51.928735   59516 main.go:141] libmachine: Getting to WaitForSSH function...
	I0930 12:07:51.928740   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:51.931438   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.931850   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:51.931906   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:51.932051   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:51.932221   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:51.932375   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:51.932514   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:51.932630   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:51.932832   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:51.932848   59516 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0930 12:07:52.029221   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 12:07:52.029240   59516 main.go:141] libmachine: Detecting the provisioner...
	I0930 12:07:52.029247   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.032008   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.032418   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.032448   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.032567   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.032748   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.032928   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.033049   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.033175   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:52.033383   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:52.033397   59516 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0930 12:07:52.134568   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0930 12:07:52.134657   59516 main.go:141] libmachine: found compatible host: buildroot
	I0930 12:07:52.134672   59516 main.go:141] libmachine: Provisioning with buildroot...
	I0930 12:07:52.134684   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:07:52.134911   59516 buildroot.go:166] provisioning hostname "kubernetes-upgrade-001996"
	I0930 12:07:52.134936   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:07:52.135109   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.138024   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.138341   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.138368   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.138496   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.138721   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.138886   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.139016   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.139189   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:52.139354   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:52.139368   59516 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-001996 && echo "kubernetes-upgrade-001996" | sudo tee /etc/hostname
	I0930 12:07:52.252270   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-001996
	
	I0930 12:07:52.252322   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.255116   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.255531   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.255554   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.255747   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.255935   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.256106   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.256243   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.256395   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:52.256563   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:52.256578   59516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-001996' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-001996/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-001996' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 12:07:52.363063   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 12:07:52.363092   59516 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 12:07:52.363117   59516 buildroot.go:174] setting up certificates
	I0930 12:07:52.363129   59516 provision.go:84] configureAuth start
	I0930 12:07:52.363139   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:07:52.363407   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:07:52.366052   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.366439   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.366459   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.366622   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.368857   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.369223   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.369250   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.369431   59516 provision.go:143] copyHostCerts
	I0930 12:07:52.369486   59516 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 12:07:52.369503   59516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 12:07:52.369586   59516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 12:07:52.369738   59516 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 12:07:52.369750   59516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 12:07:52.369789   59516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 12:07:52.369883   59516 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 12:07:52.369893   59516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 12:07:52.369929   59516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 12:07:52.370011   59516 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-001996 san=[127.0.0.1 192.168.50.128 kubernetes-upgrade-001996 localhost minikube]
	I0930 12:07:52.525471   59516 provision.go:177] copyRemoteCerts
	I0930 12:07:52.525548   59516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 12:07:52.525602   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.528593   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.528957   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.528983   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.529199   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.529421   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.529574   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.529716   59516 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:07:52.608408   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 12:07:52.632704   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0930 12:07:52.656834   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 12:07:52.683444   59516 provision.go:87] duration metric: took 320.301421ms to configureAuth
	I0930 12:07:52.683472   59516 buildroot.go:189] setting minikube options for container-runtime
	I0930 12:07:52.683625   59516 config.go:182] Loaded profile config "kubernetes-upgrade-001996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0930 12:07:52.683723   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.686619   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.687042   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.687074   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.687264   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.687495   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.687628   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.687764   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.687915   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:52.688115   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:52.688136   59516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 12:07:52.901368   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 12:07:52.901391   59516 main.go:141] libmachine: Checking connection to Docker...
	I0930 12:07:52.901401   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetURL
	I0930 12:07:52.902698   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | Using libvirt version 6000000
	I0930 12:07:52.904976   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.905315   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.905356   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.905494   59516 main.go:141] libmachine: Docker is up and running!
	I0930 12:07:52.905508   59516 main.go:141] libmachine: Reticulating splines...
	I0930 12:07:52.905515   59516 client.go:171] duration metric: took 23.674737469s to LocalClient.Create
	I0930 12:07:52.905539   59516 start.go:167] duration metric: took 23.674809922s to libmachine.API.Create "kubernetes-upgrade-001996"
	I0930 12:07:52.905549   59516 start.go:293] postStartSetup for "kubernetes-upgrade-001996" (driver="kvm2")
	I0930 12:07:52.905558   59516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 12:07:52.905575   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:52.905800   59516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 12:07:52.905824   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:52.907779   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.908057   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:52.908092   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:52.908185   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:52.908391   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:52.908554   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:52.908726   59516 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:07:52.990067   59516 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 12:07:52.994534   59516 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 12:07:52.994557   59516 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 12:07:52.994621   59516 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 12:07:52.994719   59516 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 12:07:52.994840   59516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 12:07:53.005053   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:07:53.030541   59516 start.go:296] duration metric: took 124.980586ms for postStartSetup
	I0930 12:07:53.030586   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetConfigRaw
	I0930 12:07:53.031275   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:07:53.034122   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.034506   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:53.034537   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.034717   59516 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/config.json ...
	I0930 12:07:53.034906   59516 start.go:128] duration metric: took 23.824387942s to createHost
	I0930 12:07:53.034927   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:53.037265   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.037757   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:53.037781   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.037914   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:53.038075   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:53.038272   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:53.038422   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:53.038614   59516 main.go:141] libmachine: Using SSH client type: native
	I0930 12:07:53.038836   59516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:07:53.038848   59516 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 12:07:53.138769   59516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727698073.112282461
	
	I0930 12:07:53.138790   59516 fix.go:216] guest clock: 1727698073.112282461
	I0930 12:07:53.138797   59516 fix.go:229] Guest: 2024-09-30 12:07:53.112282461 +0000 UTC Remote: 2024-09-30 12:07:53.034917559 +0000 UTC m=+23.937695749 (delta=77.364902ms)
	I0930 12:07:53.138821   59516 fix.go:200] guest clock delta is within tolerance: 77.364902ms
	I0930 12:07:53.138828   59516 start.go:83] releasing machines lock for "kubernetes-upgrade-001996", held for 23.928401936s
	I0930 12:07:53.138852   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:53.139183   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:07:53.142004   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.142385   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:53.142427   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.142547   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:53.143046   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:53.143216   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:07:53.143312   59516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 12:07:53.143356   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:53.143442   59516 ssh_runner.go:195] Run: cat /version.json
	I0930 12:07:53.143459   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:07:53.145871   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.146067   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.146241   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:53.146274   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.146406   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:53.146428   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:53.146432   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:53.146552   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:07:53.146633   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:53.146759   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:53.146800   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:07:53.146896   59516 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:07:53.146918   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:07:53.147040   59516 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:07:53.249164   59516 ssh_runner.go:195] Run: systemctl --version
	I0930 12:07:53.255478   59516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 12:07:53.419789   59516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 12:07:53.426841   59516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 12:07:53.426919   59516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 12:07:53.443915   59516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 12:07:53.443938   59516 start.go:495] detecting cgroup driver to use...
	I0930 12:07:53.443995   59516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 12:07:53.460134   59516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 12:07:53.475388   59516 docker.go:217] disabling cri-docker service (if available) ...
	I0930 12:07:53.475460   59516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 12:07:53.490181   59516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 12:07:53.505085   59516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 12:07:53.618311   59516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 12:07:53.755200   59516 docker.go:233] disabling docker service ...
	I0930 12:07:53.755294   59516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 12:07:53.770306   59516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 12:07:53.783551   59516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 12:07:53.931885   59516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 12:07:54.064626   59516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 12:07:54.086314   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 12:07:54.106596   59516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0930 12:07:54.106660   59516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:07:54.120033   59516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 12:07:54.120107   59516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:07:54.132443   59516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:07:54.146716   59516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:07:54.157891   59516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 12:07:54.169456   59516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 12:07:54.179438   59516 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 12:07:54.179499   59516 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 12:07:54.194014   59516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 12:07:54.204265   59516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:07:54.326136   59516 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 12:07:54.429823   59516 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 12:07:54.429892   59516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 12:07:54.434758   59516 start.go:563] Will wait 60s for crictl version
	I0930 12:07:54.434826   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:54.438646   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 12:07:54.483641   59516 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 12:07:54.483727   59516 ssh_runner.go:195] Run: crio --version
	I0930 12:07:54.519447   59516 ssh_runner.go:195] Run: crio --version
	I0930 12:07:54.549739   59516 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0930 12:07:54.550972   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:07:54.553859   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:54.554228   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:07:44 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:07:54.554261   59516 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:07:54.554436   59516 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 12:07:54.558574   59516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 12:07:54.571451   59516 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 12:07:54.571557   59516 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 12:07:54.571599   59516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:07:54.611793   59516 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 12:07:54.611881   59516 ssh_runner.go:195] Run: which lz4
	I0930 12:07:54.615969   59516 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 12:07:54.620519   59516 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 12:07:54.620554   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0930 12:07:56.299906   59516 crio.go:462] duration metric: took 1.683977227s to copy over tarball
	I0930 12:07:56.299979   59516 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 12:07:58.944105   59516 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64410328s)
	I0930 12:07:58.944132   59516 crio.go:469] duration metric: took 2.644195941s to extract the tarball
	I0930 12:07:58.944145   59516 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 12:07:58.987899   59516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:07:59.036978   59516 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0930 12:07:59.037006   59516 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0930 12:07:59.037075   59516 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 12:07:59.037093   59516 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.037111   59516 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.037126   59516 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.037139   59516 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.037076   59516 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.037158   59516 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0930 12:07:59.037166   59516 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.038494   59516 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.038504   59516 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 12:07:59.038504   59516 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.038501   59516 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.038492   59516 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0930 12:07:59.038502   59516 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.038531   59516 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.038530   59516 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.213023   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.215463   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.224437   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.241696   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.244519   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.266252   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0930 12:07:59.274131   59516 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0930 12:07:59.274184   59516 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.274235   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.279748   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.344558   59516 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0930 12:07:59.344604   59516 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.344645   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.391373   59516 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0930 12:07:59.391429   59516 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.391481   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.402786   59516 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0930 12:07:59.402818   59516 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0930 12:07:59.402850   59516 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.402883   59516 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0930 12:07:59.402897   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.402824   59516 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.402913   59516 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0930 12:07:59.402923   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.402941   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.402955   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.417579   59516 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0930 12:07:59.417633   59516 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.417672   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.417698   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.417674   59516 ssh_runner.go:195] Run: which crictl
	I0930 12:07:59.417768   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 12:07:59.417813   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.425984   59516 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 12:07:59.492136   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.492264   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.536676   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.570370   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.572505   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.572548   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.572510   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 12:07:59.740452   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.740475   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0930 12:07:59.740510   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0930 12:07:59.740559   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0930 12:07:59.740586   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0930 12:07:59.769012   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.769288   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0930 12:07:59.883536   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0930 12:07:59.902883   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0930 12:07:59.902932   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0930 12:07:59.902989   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0930 12:07:59.903026   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0930 12:07:59.916658   59516 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0930 12:07:59.919118   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0930 12:07:59.963286   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0930 12:07:59.963371   59516 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0930 12:07:59.963418   59516 cache_images.go:92] duration metric: took 926.39994ms to LoadCachedImages
	W0930 12:07:59.963499   59516 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19734-3842/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0930 12:07:59.963514   59516 kubeadm.go:934] updating node { 192.168.50.128 8443 v1.20.0 crio true true} ...
	I0930 12:07:59.963641   59516 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-001996 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 12:07:59.963745   59516 ssh_runner.go:195] Run: crio config
	I0930 12:08:00.012427   59516 cni.go:84] Creating CNI manager for ""
	I0930 12:08:00.012453   59516 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 12:08:00.012470   59516 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 12:08:00.012488   59516 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-001996 NodeName:kubernetes-upgrade-001996 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0930 12:08:00.012623   59516 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-001996"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 12:08:00.012696   59516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0930 12:08:00.024104   59516 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 12:08:00.024183   59516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 12:08:00.035173   59516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0930 12:08:00.053604   59516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 12:08:00.072782   59516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0930 12:08:00.091086   59516 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0930 12:08:00.095383   59516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 12:08:00.108750   59516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:08:00.237378   59516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 12:08:00.256566   59516 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996 for IP: 192.168.50.128
	I0930 12:08:00.256592   59516 certs.go:194] generating shared ca certs ...
	I0930 12:08:00.256611   59516 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:00.256812   59516 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 12:08:00.256877   59516 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 12:08:00.256893   59516 certs.go:256] generating profile certs ...
	I0930 12:08:00.256979   59516 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.key
	I0930 12:08:00.257009   59516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.crt with IP's: []
	I0930 12:08:00.497759   59516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.crt ...
	I0930 12:08:00.497798   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.crt: {Name:mk4b388c78d285ddb61ca1f76c43684252f1aa0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:00.497982   59516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.key ...
	I0930 12:08:00.497999   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.key: {Name:mk80ab327105ea5ec67865d684810c3522335190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:00.498088   59516 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key.d740bd78
	I0930 12:08:00.498109   59516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt.d740bd78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.128]
	I0930 12:08:00.928238   59516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt.d740bd78 ...
	I0930 12:08:00.928271   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt.d740bd78: {Name:mk922b7bd4c67794c8c4e601e1552974cc18e1b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:00.928454   59516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key.d740bd78 ...
	I0930 12:08:00.928469   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key.d740bd78: {Name:mkf9eeef89801cda342a661aeb91ebd3a8b0e78c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:00.928538   59516 certs.go:381] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt.d740bd78 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt
	I0930 12:08:00.928610   59516 certs.go:385] copying /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key.d740bd78 -> /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key
	I0930 12:08:00.928662   59516 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key
	I0930 12:08:00.928677   59516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.crt with IP's: []
	I0930 12:08:01.024542   59516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.crt ...
	I0930 12:08:01.024575   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.crt: {Name:mkb812891a24556a9e5a48a298c5f6c762b6d1ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:01.024727   59516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key ...
	I0930 12:08:01.024740   59516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key: {Name:mk7498a3310e98797cccd6dac49c753b8d6ed01b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:08:01.024901   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 12:08:01.024939   59516 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 12:08:01.024947   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 12:08:01.024967   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 12:08:01.024986   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 12:08:01.025007   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 12:08:01.025043   59516 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:08:01.025597   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 12:08:01.053141   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 12:08:01.078872   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 12:08:01.107563   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 12:08:01.133144   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 12:08:01.161542   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 12:08:01.186801   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 12:08:01.212611   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 12:08:01.238598   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 12:08:01.263579   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 12:08:01.289435   59516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 12:08:01.315649   59516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 12:08:01.333974   59516 ssh_runner.go:195] Run: openssl version
	I0930 12:08:01.340128   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 12:08:01.351657   59516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 12:08:01.356637   59516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 12:08:01.356695   59516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 12:08:01.362724   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 12:08:01.373939   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 12:08:01.385469   59516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 12:08:01.390616   59516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 12:08:01.390663   59516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 12:08:01.396472   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 12:08:01.407855   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 12:08:01.419554   59516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:08:01.424822   59516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:08:01.424893   59516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:08:01.431210   59516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 12:08:01.443049   59516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 12:08:01.447628   59516 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 12:08:01.447680   59516 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:08:01.447760   59516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 12:08:01.447814   59516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 12:08:01.493390   59516 cri.go:89] found id: ""
	I0930 12:08:01.493464   59516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 12:08:01.504467   59516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 12:08:01.515032   59516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 12:08:01.525522   59516 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 12:08:01.525539   59516 kubeadm.go:157] found existing configuration files:
	
	I0930 12:08:01.525580   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 12:08:01.535279   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 12:08:01.535353   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 12:08:01.545669   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 12:08:01.555908   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 12:08:01.555972   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 12:08:01.565965   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 12:08:01.575707   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 12:08:01.575778   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 12:08:01.586681   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 12:08:01.596696   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 12:08:01.596756   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 12:08:01.607692   59516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 12:08:01.853472   59516 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 12:09:59.619327   59516 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 12:09:59.619454   59516 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 12:09:59.620912   59516 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 12:09:59.620957   59516 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 12:09:59.621038   59516 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 12:09:59.621156   59516 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 12:09:59.621294   59516 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 12:09:59.621377   59516 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 12:09:59.622980   59516 out.go:235]   - Generating certificates and keys ...
	I0930 12:09:59.623076   59516 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 12:09:59.623146   59516 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 12:09:59.623215   59516 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 12:09:59.623262   59516 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 12:09:59.623315   59516 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 12:09:59.623355   59516 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 12:09:59.623405   59516 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 12:09:59.623555   59516 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	I0930 12:09:59.623635   59516 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 12:09:59.623807   59516 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	I0930 12:09:59.623897   59516 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 12:09:59.623949   59516 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 12:09:59.624001   59516 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 12:09:59.624098   59516 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 12:09:59.624178   59516 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 12:09:59.624279   59516 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 12:09:59.624370   59516 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 12:09:59.624455   59516 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 12:09:59.624604   59516 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 12:09:59.624728   59516 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 12:09:59.624784   59516 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 12:09:59.624876   59516 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 12:09:59.626302   59516 out.go:235]   - Booting up control plane ...
	I0930 12:09:59.626399   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 12:09:59.626494   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 12:09:59.626585   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 12:09:59.626689   59516 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 12:09:59.626885   59516 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 12:09:59.626937   59516 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 12:09:59.626999   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:09:59.627159   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:09:59.627236   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:09:59.627419   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:09:59.627487   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:09:59.627657   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:09:59.627720   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:09:59.627901   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:09:59.627972   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:09:59.628115   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:09:59.628123   59516 kubeadm.go:310] 
	I0930 12:09:59.628157   59516 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 12:09:59.628190   59516 kubeadm.go:310] 		timed out waiting for the condition
	I0930 12:09:59.628196   59516 kubeadm.go:310] 
	I0930 12:09:59.628230   59516 kubeadm.go:310] 	This error is likely caused by:
	I0930 12:09:59.628261   59516 kubeadm.go:310] 		- The kubelet is not running
	I0930 12:09:59.628353   59516 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 12:09:59.628360   59516 kubeadm.go:310] 
	I0930 12:09:59.628468   59516 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 12:09:59.628529   59516 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 12:09:59.628559   59516 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 12:09:59.628565   59516 kubeadm.go:310] 
	I0930 12:09:59.628659   59516 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 12:09:59.628731   59516 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 12:09:59.628738   59516 kubeadm.go:310] 
	I0930 12:09:59.628826   59516 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 12:09:59.628899   59516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 12:09:59.628964   59516 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 12:09:59.629037   59516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 12:09:59.629151   59516 kubeadm.go:310] 
	W0930 12:09:59.629182   59516 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-001996 localhost] and IPs [192.168.50.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0930 12:09:59.629219   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0930 12:10:00.091838   59516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 12:10:00.112365   59516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 12:10:00.126968   59516 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 12:10:00.127005   59516 kubeadm.go:157] found existing configuration files:
	
	I0930 12:10:00.127048   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 12:10:00.140553   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 12:10:00.140612   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 12:10:00.151772   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 12:10:00.162176   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 12:10:00.162229   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 12:10:00.172773   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 12:10:00.182697   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 12:10:00.182751   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 12:10:00.196719   59516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 12:10:00.211027   59516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 12:10:00.211084   59516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 12:10:00.222788   59516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 12:10:00.452046   59516 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 12:11:56.516404   59516 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0930 12:11:56.516519   59516 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0930 12:11:56.518054   59516 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0930 12:11:56.518122   59516 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 12:11:56.518191   59516 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 12:11:56.518306   59516 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 12:11:56.518424   59516 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0930 12:11:56.518542   59516 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 12:11:56.520316   59516 out.go:235]   - Generating certificates and keys ...
	I0930 12:11:56.520405   59516 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 12:11:56.520483   59516 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 12:11:56.520578   59516 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0930 12:11:56.520670   59516 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0930 12:11:56.520787   59516 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0930 12:11:56.520862   59516 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0930 12:11:56.520915   59516 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0930 12:11:56.520968   59516 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0930 12:11:56.521028   59516 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0930 12:11:56.521095   59516 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0930 12:11:56.521126   59516 kubeadm.go:310] [certs] Using the existing "sa" key
	I0930 12:11:56.521198   59516 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 12:11:56.521250   59516 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 12:11:56.521298   59516 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 12:11:56.521355   59516 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 12:11:56.521406   59516 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 12:11:56.521515   59516 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 12:11:56.521672   59516 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 12:11:56.521735   59516 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 12:11:56.521795   59516 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 12:11:56.523243   59516 out.go:235]   - Booting up control plane ...
	I0930 12:11:56.523322   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 12:11:56.523399   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 12:11:56.523468   59516 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 12:11:56.523560   59516 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 12:11:56.523795   59516 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0930 12:11:56.523849   59516 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0930 12:11:56.523914   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:11:56.524088   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:11:56.524189   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:11:56.524365   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:11:56.524427   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:11:56.524618   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:11:56.524678   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:11:56.524834   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:11:56.524892   59516 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0930 12:11:56.525041   59516 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0930 12:11:56.525048   59516 kubeadm.go:310] 
	I0930 12:11:56.525083   59516 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0930 12:11:56.525117   59516 kubeadm.go:310] 		timed out waiting for the condition
	I0930 12:11:56.525123   59516 kubeadm.go:310] 
	I0930 12:11:56.525151   59516 kubeadm.go:310] 	This error is likely caused by:
	I0930 12:11:56.525196   59516 kubeadm.go:310] 		- The kubelet is not running
	I0930 12:11:56.525306   59516 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0930 12:11:56.525314   59516 kubeadm.go:310] 
	I0930 12:11:56.525425   59516 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0930 12:11:56.525455   59516 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0930 12:11:56.525497   59516 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0930 12:11:56.525512   59516 kubeadm.go:310] 
	I0930 12:11:56.525691   59516 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0930 12:11:56.525819   59516 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0930 12:11:56.525830   59516 kubeadm.go:310] 
	I0930 12:11:56.525942   59516 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0930 12:11:56.526025   59516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0930 12:11:56.526125   59516 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0930 12:11:56.526244   59516 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0930 12:11:56.526288   59516 kubeadm.go:310] 
	I0930 12:11:56.526312   59516 kubeadm.go:394] duration metric: took 3m55.078635029s to StartCluster
	I0930 12:11:56.526354   59516 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0930 12:11:56.526406   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0930 12:11:56.571480   59516 cri.go:89] found id: ""
	I0930 12:11:56.571507   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.571517   59516 logs.go:278] No container was found matching "kube-apiserver"
	I0930 12:11:56.571523   59516 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0930 12:11:56.571595   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0930 12:11:56.609310   59516 cri.go:89] found id: ""
	I0930 12:11:56.609343   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.609354   59516 logs.go:278] No container was found matching "etcd"
	I0930 12:11:56.609367   59516 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0930 12:11:56.609416   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0930 12:11:56.644701   59516 cri.go:89] found id: ""
	I0930 12:11:56.644727   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.644735   59516 logs.go:278] No container was found matching "coredns"
	I0930 12:11:56.644740   59516 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0930 12:11:56.644786   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0930 12:11:56.679710   59516 cri.go:89] found id: ""
	I0930 12:11:56.679736   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.679743   59516 logs.go:278] No container was found matching "kube-scheduler"
	I0930 12:11:56.679749   59516 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0930 12:11:56.679795   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0930 12:11:56.714606   59516 cri.go:89] found id: ""
	I0930 12:11:56.714635   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.714644   59516 logs.go:278] No container was found matching "kube-proxy"
	I0930 12:11:56.714652   59516 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0930 12:11:56.714712   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0930 12:11:56.753545   59516 cri.go:89] found id: ""
	I0930 12:11:56.753579   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.753590   59516 logs.go:278] No container was found matching "kube-controller-manager"
	I0930 12:11:56.753597   59516 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0930 12:11:56.753670   59516 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0930 12:11:56.806073   59516 cri.go:89] found id: ""
	I0930 12:11:56.806100   59516 logs.go:276] 0 containers: []
	W0930 12:11:56.806109   59516 logs.go:278] No container was found matching "kindnet"
	I0930 12:11:56.806119   59516 logs.go:123] Gathering logs for container status ...
	I0930 12:11:56.806133   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0930 12:11:56.846033   59516 logs.go:123] Gathering logs for kubelet ...
	I0930 12:11:56.846064   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0930 12:11:56.899332   59516 logs.go:123] Gathering logs for dmesg ...
	I0930 12:11:56.899374   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0930 12:11:56.914367   59516 logs.go:123] Gathering logs for describe nodes ...
	I0930 12:11:56.914396   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0930 12:11:57.032861   59516 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0930 12:11:57.032892   59516 logs.go:123] Gathering logs for CRI-O ...
	I0930 12:11:57.032908   59516 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0930 12:11:57.136333   59516 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0930 12:11:57.136395   59516 out.go:270] * 
	* 
	W0930 12:11:57.136447   59516 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 12:11:57.136460   59516 out.go:270] * 
	* 
	W0930 12:11:57.137299   59516 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0930 12:11:57.140170   59516 out.go:201] 
	W0930 12:11:57.141322   59516 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0930 12:11:57.141360   59516 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0930 12:11:57.141382   59516 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0930 12:11:57.142856   59516 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
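Editor's note: the repeated dial errors on 127.0.0.1:10248 show the kubelet never came up, so kubeadm timed out waiting for the control plane; minikube's suggestion above points at a kubelet/CRI-O cgroup-driver mismatch. A minimal troubleshooting sketch along those lines (profile name and start flags are taken from this run; the `crio config`/`cgroupDriver` greps are assumptions about where each component reports its cgroup driver):

	# Commands suggested in the kubeadm output above: check whether the kubelet is running at all.
	minikube ssh -p kubernetes-upgrade-001996 -- sudo systemctl status kubelet
	minikube ssh -p kubernetes-upgrade-001996 -- sudo journalctl -xeu kubelet | tail -n 50

	# Compare the two cgroup drivers (assumed locations): CRI-O's cgroup_manager vs. the kubelet's cgroupDriver.
	minikube ssh -p kubernetes-upgrade-001996 -- sudo crio config 2>/dev/null | grep cgroup_manager
	minikube ssh -p kubernetes-upgrade-001996 -- sudo grep cgroupDriver /var/lib/kubelet/config.yaml

	# If they disagree, retry the start with the flag minikube suggests.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd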
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-001996
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-001996: (1.328540408s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-001996 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-001996 status --format={{.Host}}: exit status 7 (61.852266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m49.919138252s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-001996 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.245356ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-001996] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-001996
	    minikube start -p kubernetes-upgrade-001996 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0019962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-001996 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
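Editor's note: as the test expects, the in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED and minikube prints three recovery options. A short sketch of confirming the profile's current version and applying option 1 (the kubectl and minikube invocations mirror the ones shown in this report; `profile list` is a supplementary check, not part of the test):

	# Confirm what the existing profile is running before choosing one of the options above.
	kubectl --context kubernetes-upgrade-001996 version --output=json
	out/minikube-linux-amd64 profile list

	# Option 1 from the suggestion: recreate the cluster at the older version.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-001996
	out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio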
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-001996 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (29.051024291s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-30 12:14:17.699604445 +0000 UTC m=+6833.269042180
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-001996 -n kubernetes-upgrade-001996
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-001996 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-001996 logs -n 25: (1.747979556s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-575582                                   | no-preload-575582            | jenkins | v1.34.0 | 30 Sep 24 12:03 UTC | 30 Sep 24 12:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-121479        | old-k8s-version-121479       | jenkins | v1.34.0 | 30 Sep 24 12:04 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| ssh     | cert-options-505847 ssh                                | cert-options-505847          | jenkins | v1.34.0 | 30 Sep 24 12:04 UTC | 30 Sep 24 12:04 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-505847 -- sudo                         | cert-options-505847          | jenkins | v1.34.0 | 30 Sep 24 12:04 UTC | 30 Sep 24 12:04 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-505847                                 | cert-options-505847          | jenkins | v1.34.0 | 30 Sep 24 12:04 UTC | 30 Sep 24 12:04 UTC |
	| start   | -p embed-certs-499540                                  | embed-certs-499540           | jenkins | v1.34.0 | 30 Sep 24 12:04 UTC | 30 Sep 24 12:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-499540            | embed-certs-499540           | jenkins | v1.34.0 | 30 Sep 24 12:05 UTC | 30 Sep 24 12:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-499540                                  | embed-certs-499540           | jenkins | v1.34.0 | 30 Sep 24 12:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-121479                              | old-k8s-version-121479       | jenkins | v1.34.0 | 30 Sep 24 12:05 UTC | 30 Sep 24 12:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-121479             | old-k8s-version-121479       | jenkins | v1.34.0 | 30 Sep 24 12:05 UTC | 30 Sep 24 12:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-121479                              | old-k8s-version-121479       | jenkins | v1.34.0 | 30 Sep 24 12:05 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-575582             | no-preload-575582            | jenkins | v1.34.0 | 30 Sep 24 12:06 UTC | 30 Sep 24 12:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-575582                                   | no-preload-575582            | jenkins | v1.34.0 | 30 Sep 24 12:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-836238                              | cert-expiration-836238       | jenkins | v1.34.0 | 30 Sep 24 12:07 UTC | 30 Sep 24 12:07 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-836238                              | cert-expiration-836238       | jenkins | v1.34.0 | 30 Sep 24 12:07 UTC | 30 Sep 24 12:07 UTC |
	| delete  | -p                                                     | disable-driver-mounts-909804 | jenkins | v1.34.0 | 30 Sep 24 12:07 UTC | 30 Sep 24 12:07 UTC |
	|         | disable-driver-mounts-909804                           |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-001996                           | kubernetes-upgrade-001996    | jenkins | v1.34.0 | 30 Sep 24 12:07 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-499540                 | embed-certs-499540           | jenkins | v1.34.0 | 30 Sep 24 12:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-499540                                  | embed-certs-499540           | jenkins | v1.34.0 | 30 Sep 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-575582                  | no-preload-575582            | jenkins | v1.34.0 | 30 Sep 24 12:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-575582                                   | no-preload-575582            | jenkins | v1.34.0 | 30 Sep 24 12:08 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-001996                           | kubernetes-upgrade-001996    | jenkins | v1.34.0 | 30 Sep 24 12:11 UTC | 30 Sep 24 12:11 UTC |
	| start   | -p kubernetes-upgrade-001996                           | kubernetes-upgrade-001996    | jenkins | v1.34.0 | 30 Sep 24 12:11 UTC | 30 Sep 24 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-001996                           | kubernetes-upgrade-001996    | jenkins | v1.34.0 | 30 Sep 24 12:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-001996                           | kubernetes-upgrade-001996    | jenkins | v1.34.0 | 30 Sep 24 12:13 UTC | 30 Sep 24 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 12:13:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 12:13:48.690787   61734 out.go:345] Setting OutFile to fd 1 ...
	I0930 12:13:48.691080   61734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 12:13:48.691090   61734 out.go:358] Setting ErrFile to fd 2...
	I0930 12:13:48.691095   61734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 12:13:48.691271   61734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 12:13:48.691845   61734 out.go:352] Setting JSON to false
	I0930 12:13:48.692734   61734 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6976,"bootTime":1727691453,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 12:13:48.692857   61734 start.go:139] virtualization: kvm guest
	I0930 12:13:48.694978   61734 out.go:177] * [kubernetes-upgrade-001996] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 12:13:48.696389   61734 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 12:13:48.696405   61734 notify.go:220] Checking for updates...
	I0930 12:13:48.698568   61734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 12:13:48.699718   61734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 12:13:48.700788   61734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 12:13:48.701977   61734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 12:13:48.703217   61734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 12:13:48.704948   61734 config.go:182] Loaded profile config "kubernetes-upgrade-001996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 12:13:48.705399   61734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 12:13:48.705456   61734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 12:13:48.721317   61734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36897
	I0930 12:13:48.721853   61734 main.go:141] libmachine: () Calling .GetVersion
	I0930 12:13:48.722415   61734 main.go:141] libmachine: Using API Version  1
	I0930 12:13:48.722446   61734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 12:13:48.722834   61734 main.go:141] libmachine: () Calling .GetMachineName
	I0930 12:13:48.723020   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:13:48.723255   61734 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 12:13:48.723562   61734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 12:13:48.723600   61734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 12:13:48.738667   61734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
	I0930 12:13:48.739071   61734 main.go:141] libmachine: () Calling .GetVersion
	I0930 12:13:48.739635   61734 main.go:141] libmachine: Using API Version  1
	I0930 12:13:48.739660   61734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 12:13:48.739954   61734 main.go:141] libmachine: () Calling .GetMachineName
	I0930 12:13:48.740143   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:13:48.778174   61734 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 12:13:48.779777   61734 start.go:297] selected driver: kvm2
	I0930 12:13:48.779798   61734 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:13:48.779945   61734 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 12:13:48.780614   61734 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 12:13:48.780690   61734 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 12:13:48.796397   61734 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 12:13:48.796774   61734 cni.go:84] Creating CNI manager for ""
	I0930 12:13:48.796818   61734 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 12:13:48.796852   61734 start.go:340] cluster config:
	{Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-001996 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:13:48.796950   61734 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 12:13:48.799596   61734 out.go:177] * Starting "kubernetes-upgrade-001996" primary control-plane node in "kubernetes-upgrade-001996" cluster
	I0930 12:13:46.893195   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:46.893655   60020 main.go:141] libmachine: (embed-certs-499540) DBG | unable to find current IP address of domain embed-certs-499540 in network mk-embed-certs-499540
	I0930 12:13:46.893679   60020 main.go:141] libmachine: (embed-certs-499540) DBG | I0930 12:13:46.893595   61503 retry.go:31] will retry after 3.060757327s: waiting for machine to come up
	I0930 12:13:46.881196   60223 pod_ready.go:103] pod "kube-apiserver-no-preload-575582" in "kube-system" namespace has status "Ready":"False"
	I0930 12:13:48.882433   60223 pod_ready.go:103] pod "kube-apiserver-no-preload-575582" in "kube-system" namespace has status "Ready":"False"
	I0930 12:13:49.380554   60223 pod_ready.go:93] pod "kube-apiserver-no-preload-575582" in "kube-system" namespace has status "Ready":"True"
	I0930 12:13:49.380580   60223 pod_ready.go:82] duration metric: took 4.506508394s for pod "kube-apiserver-no-preload-575582" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.380588   60223 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-575582" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.385150   60223 pod_ready.go:93] pod "kube-controller-manager-no-preload-575582" in "kube-system" namespace has status "Ready":"True"
	I0930 12:13:49.385170   60223 pod_ready.go:82] duration metric: took 4.575912ms for pod "kube-controller-manager-no-preload-575582" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.385177   60223 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-gt5s8" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.390283   60223 pod_ready.go:93] pod "kube-proxy-gt5s8" in "kube-system" namespace has status "Ready":"True"
	I0930 12:13:49.390304   60223 pod_ready.go:82] duration metric: took 5.120839ms for pod "kube-proxy-gt5s8" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.390312   60223 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-575582" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.394198   60223 pod_ready.go:93] pod "kube-scheduler-no-preload-575582" in "kube-system" namespace has status "Ready":"True"
	I0930 12:13:49.394219   60223 pod_ready.go:82] duration metric: took 3.899882ms for pod "kube-scheduler-no-preload-575582" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:49.394230   60223 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace to be "Ready" ...
	I0930 12:13:48.800762   61734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 12:13:48.800796   61734 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0930 12:13:48.800806   61734 cache.go:56] Caching tarball of preloaded images
	I0930 12:13:48.800885   61734 preload.go:172] Found /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0930 12:13:48.800898   61734 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0930 12:13:48.800985   61734 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/config.json ...
	I0930 12:13:48.801176   61734 start.go:360] acquireMachinesLock for kubernetes-upgrade-001996: {Name:mk00c8e95dd7ef3f3af18c831feaff25b009c005 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 12:13:49.955951   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:49.956586   60020 main.go:141] libmachine: (embed-certs-499540) DBG | unable to find current IP address of domain embed-certs-499540 in network mk-embed-certs-499540
	I0930 12:13:49.956628   60020 main.go:141] libmachine: (embed-certs-499540) DBG | I0930 12:13:49.956532   61503 retry.go:31] will retry after 4.416202117s: waiting for machine to come up
	I0930 12:13:54.375154   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.375844   60020 main.go:141] libmachine: (embed-certs-499540) Found IP for machine: 192.168.83.178
	I0930 12:13:54.375865   60020 main.go:141] libmachine: (embed-certs-499540) Reserving static IP address...
	I0930 12:13:54.375881   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has current primary IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.376217   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "embed-certs-499540", mac: "52:54:00:c9:94:05", ip: "192.168.83.178"} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.376242   60020 main.go:141] libmachine: (embed-certs-499540) DBG | skip adding static IP to network mk-embed-certs-499540 - found existing host DHCP lease matching {name: "embed-certs-499540", mac: "52:54:00:c9:94:05", ip: "192.168.83.178"}
	I0930 12:13:54.376255   60020 main.go:141] libmachine: (embed-certs-499540) Reserved static IP address: 192.168.83.178
	I0930 12:13:54.376270   60020 main.go:141] libmachine: (embed-certs-499540) Waiting for SSH to be available...
	I0930 12:13:54.376284   60020 main.go:141] libmachine: (embed-certs-499540) DBG | Getting to WaitForSSH function...
	I0930 12:13:54.378597   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.378947   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.378974   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.379132   60020 main.go:141] libmachine: (embed-certs-499540) DBG | Using SSH client type: external
	I0930 12:13:54.379162   60020 main.go:141] libmachine: (embed-certs-499540) DBG | Using SSH private key: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa (-rw-------)
	I0930 12:13:54.379201   60020 main.go:141] libmachine: (embed-certs-499540) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.178 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0930 12:13:54.379222   60020 main.go:141] libmachine: (embed-certs-499540) DBG | About to run SSH command:
	I0930 12:13:54.379241   60020 main.go:141] libmachine: (embed-certs-499540) DBG | exit 0
	I0930 12:13:54.514001   60020 main.go:141] libmachine: (embed-certs-499540) DBG | SSH cmd err, output: <nil>: 
	I0930 12:13:54.514364   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetConfigRaw
	I0930 12:13:54.515044   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetIP
	I0930 12:13:54.517868   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.518226   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.518247   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.518542   60020 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/config.json ...
	I0930 12:13:54.518806   60020 machine.go:93] provisionDockerMachine start ...
	I0930 12:13:54.518824   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:54.519040   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:54.521515   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.521896   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.521929   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.522040   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:54.522218   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.522367   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.522481   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:54.522630   60020 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:54.522850   60020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.178 22 <nil> <nil>}
	I0930 12:13:54.522865   60020 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 12:13:54.642085   60020 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0930 12:13:54.642125   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetMachineName
	I0930 12:13:54.642394   60020 buildroot.go:166] provisioning hostname "embed-certs-499540"
	I0930 12:13:54.642419   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetMachineName
	I0930 12:13:54.642596   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:54.645444   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.645822   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.645844   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.646027   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:54.646196   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.646307   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.646384   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:54.646500   60020 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:54.646662   60020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.178 22 <nil> <nil>}
	I0930 12:13:54.646674   60020 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-499540 && echo "embed-certs-499540" | sudo tee /etc/hostname
	I0930 12:13:54.780841   60020 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-499540
	
	I0930 12:13:54.780887   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:54.783561   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.783934   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.783970   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.784117   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:54.784323   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.784491   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:54.784632   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:54.784793   60020 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:54.784945   60020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.178 22 <nil> <nil>}
	I0930 12:13:54.784960   60020 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-499540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-499540/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-499540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 12:13:55.686871   61734 start.go:364] duration metric: took 6.885628303s to acquireMachinesLock for "kubernetes-upgrade-001996"
	I0930 12:13:55.686937   61734 start.go:96] Skipping create...Using existing machine configuration
	I0930 12:13:55.686949   61734 fix.go:54] fixHost starting: 
	I0930 12:13:55.687412   61734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 12:13:55.687475   61734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 12:13:55.707787   61734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45177
	I0930 12:13:55.708220   61734 main.go:141] libmachine: () Calling .GetVersion
	I0930 12:13:55.708780   61734 main.go:141] libmachine: Using API Version  1
	I0930 12:13:55.708808   61734 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 12:13:55.709127   61734 main.go:141] libmachine: () Calling .GetMachineName
	I0930 12:13:55.709347   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:13:55.709519   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetState
	I0930 12:13:55.711424   61734 fix.go:112] recreateIfNeeded on kubernetes-upgrade-001996: state=Running err=<nil>
	W0930 12:13:55.711467   61734 fix.go:138] unexpected machine state, will restart: <nil>
	I0930 12:13:55.713675   61734 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-001996" VM ...
	I0930 12:13:51.402181   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:13:53.902146   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:13:54.908513   60020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 12:13:54.908542   60020 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 12:13:54.908559   60020 buildroot.go:174] setting up certificates
	I0930 12:13:54.908569   60020 provision.go:84] configureAuth start
	I0930 12:13:54.908576   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetMachineName
	I0930 12:13:54.908836   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetIP
	I0930 12:13:54.911645   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.912110   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.912141   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.912273   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:54.914592   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.914920   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:54.914946   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:54.915092   60020 provision.go:143] copyHostCerts
	I0930 12:13:54.915168   60020 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 12:13:54.915181   60020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 12:13:54.915255   60020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 12:13:54.915373   60020 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 12:13:54.915383   60020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 12:13:54.915416   60020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 12:13:54.915496   60020 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 12:13:54.915505   60020 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 12:13:54.915534   60020 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 12:13:54.915601   60020 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.embed-certs-499540 san=[127.0.0.1 192.168.83.178 embed-certs-499540 localhost minikube]
	I0930 12:13:55.020690   60020 provision.go:177] copyRemoteCerts
	I0930 12:13:55.020744   60020 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 12:13:55.020767   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.023063   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.023360   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.023389   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.023556   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.023716   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.023837   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.023960   60020 sshutil.go:53] new ssh client: &{IP:192.168.83.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa Username:docker}
	I0930 12:13:55.113881   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 12:13:55.138789   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0930 12:13:55.163090   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0930 12:13:55.189901   60020 provision.go:87] duration metric: took 281.319942ms to configureAuth
	I0930 12:13:55.189933   60020 buildroot.go:189] setting minikube options for container-runtime
	I0930 12:13:55.190109   60020 config.go:182] Loaded profile config "embed-certs-499540": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 12:13:55.190172   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.192817   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.193148   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.193178   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.193353   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.193552   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.193736   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.193883   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.194036   60020 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:55.194203   60020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.178 22 <nil> <nil>}
	I0930 12:13:55.194217   60020 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 12:13:55.430446   60020 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 12:13:55.430477   60020 machine.go:96] duration metric: took 911.656604ms to provisionDockerMachine
	I0930 12:13:55.430492   60020 start.go:293] postStartSetup for "embed-certs-499540" (driver="kvm2")
	I0930 12:13:55.430516   60020 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 12:13:55.430541   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:55.430837   60020 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 12:13:55.430867   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.433896   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.434330   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.434372   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.434585   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.434766   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.434905   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.435014   60020 sshutil.go:53] new ssh client: &{IP:192.168.83.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa Username:docker}
	I0930 12:13:55.522837   60020 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 12:13:55.527079   60020 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 12:13:55.527107   60020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 12:13:55.527187   60020 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 12:13:55.527281   60020 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 12:13:55.527406   60020 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 12:13:55.537968   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:13:55.565384   60020 start.go:296] duration metric: took 134.87646ms for postStartSetup
	I0930 12:13:55.565421   60020 fix.go:56] duration metric: took 23.077138358s for fixHost
	I0930 12:13:55.565451   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.568178   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.568535   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.568584   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.568688   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.568877   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.569048   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.569264   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.569407   60020 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:55.569595   60020 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.83.178 22 <nil> <nil>}
	I0930 12:13:55.569609   60020 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 12:13:55.686684   60020 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727698435.662555040
	
	I0930 12:13:55.686716   60020 fix.go:216] guest clock: 1727698435.662555040
	I0930 12:13:55.686727   60020 fix.go:229] Guest: 2024-09-30 12:13:55.66255504 +0000 UTC Remote: 2024-09-30 12:13:55.565435245 +0000 UTC m=+340.740587691 (delta=97.119795ms)
	I0930 12:13:55.686755   60020 fix.go:200] guest clock delta is within tolerance: 97.119795ms
	I0930 12:13:55.686762   60020 start.go:83] releasing machines lock for "embed-certs-499540", held for 23.19851834s
	I0930 12:13:55.686794   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:55.687067   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetIP
	I0930 12:13:55.690024   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.690450   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.690487   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.690687   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:55.691325   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:55.691499   60020 main.go:141] libmachine: (embed-certs-499540) Calling .DriverName
	I0930 12:13:55.691573   60020 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 12:13:55.691621   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.691736   60020 ssh_runner.go:195] Run: cat /version.json
	I0930 12:13:55.691763   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHHostname
	I0930 12:13:55.694614   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.694926   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.695196   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.695233   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.695347   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:55.695387   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:55.695510   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.695678   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHPort
	I0930 12:13:55.695678   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.695868   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.695905   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHKeyPath
	I0930 12:13:55.696049   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetSSHUsername
	I0930 12:13:55.696070   60020 sshutil.go:53] new ssh client: &{IP:192.168.83.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa Username:docker}
	I0930 12:13:55.696169   60020 sshutil.go:53] new ssh client: &{IP:192.168.83.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/embed-certs-499540/id_rsa Username:docker}
	I0930 12:13:55.805749   60020 ssh_runner.go:195] Run: systemctl --version
	I0930 12:13:55.811933   60020 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 12:13:55.960247   60020 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 12:13:55.966909   60020 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 12:13:55.966989   60020 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 12:13:55.986451   60020 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0930 12:13:55.986476   60020 start.go:495] detecting cgroup driver to use...
	I0930 12:13:55.986554   60020 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 12:13:56.004592   60020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 12:13:56.020794   60020 docker.go:217] disabling cri-docker service (if available) ...
	I0930 12:13:56.020846   60020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 12:13:56.037066   60020 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 12:13:56.053141   60020 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 12:13:56.182176   60020 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 12:13:56.373415   60020 docker.go:233] disabling docker service ...
	I0930 12:13:56.373494   60020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 12:13:56.389895   60020 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 12:13:56.407434   60020 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 12:13:56.562818   60020 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 12:13:56.704742   60020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 12:13:56.722079   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 12:13:56.746289   60020 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 12:13:56.746357   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.757899   60020 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 12:13:56.757960   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.769098   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.780580   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.792126   60020 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 12:13:56.804074   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.818411   60020 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.838938   60020 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:13:56.850561   60020 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 12:13:56.860669   60020 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 12:13:56.860727   60020 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 12:13:56.875519   60020 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 12:13:56.886301   60020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:13:57.011378   60020 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 12:13:57.122738   60020 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 12:13:57.122811   60020 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 12:13:57.127730   60020 start.go:563] Will wait 60s for crictl version
	I0930 12:13:57.127792   60020 ssh_runner.go:195] Run: which crictl
	I0930 12:13:57.131939   60020 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 12:13:57.169586   60020 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 12:13:57.169681   60020 ssh_runner.go:195] Run: crio --version
	I0930 12:13:57.198817   60020 ssh_runner.go:195] Run: crio --version
	I0930 12:13:57.230645   60020 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 12:13:55.715016   61734 machine.go:93] provisionDockerMachine start ...
	I0930 12:13:55.715040   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:13:55.715295   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:55.718506   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.718822   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:55.718865   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.718969   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:13:55.719135   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.719279   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.719377   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:13:55.719527   61734 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:55.719804   61734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:13:55.719822   61734 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 12:13:55.838615   61734 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-001996
	
	I0930 12:13:55.838645   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:13:55.838882   61734 buildroot.go:166] provisioning hostname "kubernetes-upgrade-001996"
	I0930 12:13:55.838914   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:13:55.839182   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:55.841998   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.842324   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:55.842344   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.842568   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:13:55.842777   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.842930   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.843054   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:13:55.843185   61734 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:55.843366   61734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:13:55.843379   61734 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-001996 && echo "kubernetes-upgrade-001996" | sudo tee /etc/hostname
	I0930 12:13:55.974252   61734 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-001996
	
	I0930 12:13:55.974283   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:55.977348   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.977856   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:55.977885   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:55.978084   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:13:55.978331   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.978497   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:55.978674   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:13:55.978834   61734 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:55.979015   61734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:13:55.979038   61734 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-001996' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-001996/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-001996' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 12:13:56.096747   61734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 12:13:56.096794   61734 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19734-3842/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-3842/.minikube}
	I0930 12:13:56.096840   61734 buildroot.go:174] setting up certificates
	I0930 12:13:56.096850   61734 provision.go:84] configureAuth start
	I0930 12:13:56.096861   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetMachineName
	I0930 12:13:56.097083   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:13:56.100220   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.100631   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:56.100659   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.100849   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:56.103502   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.103876   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:56.103898   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.104080   61734 provision.go:143] copyHostCerts
	I0930 12:13:56.104135   61734 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem, removing ...
	I0930 12:13:56.104144   61734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem
	I0930 12:13:56.104194   61734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/key.pem (1679 bytes)
	I0930 12:13:56.104334   61734 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem, removing ...
	I0930 12:13:56.104344   61734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem
	I0930 12:13:56.104367   61734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/ca.pem (1082 bytes)
	I0930 12:13:56.104447   61734 exec_runner.go:144] found /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem, removing ...
	I0930 12:13:56.104454   61734 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem
	I0930 12:13:56.104478   61734 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-3842/.minikube/cert.pem (1123 bytes)
	I0930 12:13:56.104567   61734 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-001996 san=[127.0.0.1 192.168.50.128 kubernetes-upgrade-001996 localhost minikube]
	I0930 12:13:56.335422   61734 provision.go:177] copyRemoteCerts
	I0930 12:13:56.335481   61734 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 12:13:56.335507   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:56.337975   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.338324   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:56.338351   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.338478   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:13:56.338702   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:56.338850   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:13:56.338982   61734 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:13:56.431244   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 12:13:56.461165   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0930 12:13:56.502015   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 12:13:56.534425   61734 provision.go:87] duration metric: took 437.560855ms to configureAuth
	I0930 12:13:56.534455   61734 buildroot.go:189] setting minikube options for container-runtime
	I0930 12:13:56.534675   61734 config.go:182] Loaded profile config "kubernetes-upgrade-001996": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 12:13:56.534761   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:13:56.537512   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.537959   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:13:56.537993   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:13:56.538194   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:13:56.538396   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:56.538556   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:13:56.538724   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:13:56.538892   61734 main.go:141] libmachine: Using SSH client type: native
	I0930 12:13:56.539079   61734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:13:56.539100   61734 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0930 12:13:57.231884   60020 main.go:141] libmachine: (embed-certs-499540) Calling .GetIP
	I0930 12:13:57.235122   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:57.235512   60020 main.go:141] libmachine: (embed-certs-499540) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:94:05", ip: ""} in network mk-embed-certs-499540: {Iface:virbr3 ExpiryTime:2024-09-30 13:13:45 +0000 UTC Type:0 Mac:52:54:00:c9:94:05 Iaid: IPaddr:192.168.83.178 Prefix:24 Hostname:embed-certs-499540 Clientid:01:52:54:00:c9:94:05}
	I0930 12:13:57.235561   60020 main.go:141] libmachine: (embed-certs-499540) DBG | domain embed-certs-499540 has defined IP address 192.168.83.178 and MAC address 52:54:00:c9:94:05 in network mk-embed-certs-499540
	I0930 12:13:57.235714   60020 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0930 12:13:57.241514   60020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 12:13:57.254550   60020 kubeadm.go:883] updating cluster {Name:embed-certs-499540 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-499540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.178 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 12:13:57.254714   60020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 12:13:57.254779   60020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:13:57.296281   60020 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0930 12:13:57.296366   60020 ssh_runner.go:195] Run: which lz4
	I0930 12:13:57.300645   60020 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0930 12:13:57.304905   60020 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0930 12:13:57.304942   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0930 12:13:58.791669   60020 crio.go:462] duration metric: took 1.491055869s to copy over tarball
	I0930 12:13:58.791753   60020 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0930 12:13:56.401140   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:13:58.404180   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:14:00.901862   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:14:03.199138   61734 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0930 12:14:03.199164   61734 machine.go:96] duration metric: took 7.484132831s to provisionDockerMachine
	I0930 12:14:03.199177   61734 start.go:293] postStartSetup for "kubernetes-upgrade-001996" (driver="kvm2")
	I0930 12:14:03.199189   61734 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 12:14:03.199206   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:14:03.199659   61734 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 12:14:03.199692   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:14:03.202502   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.202840   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:03.202871   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.202988   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:14:03.203159   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:14:03.203396   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:14:03.203543   61734 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:14:03.289128   61734 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 12:14:03.293472   61734 info.go:137] Remote host: Buildroot 2023.02.9
	I0930 12:14:03.293510   61734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/addons for local assets ...
	I0930 12:14:03.293586   61734 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3842/.minikube/files for local assets ...
	I0930 12:14:03.293716   61734 filesync.go:149] local asset: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem -> 110092.pem in /etc/ssl/certs
	I0930 12:14:03.293856   61734 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0930 12:14:03.304841   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:14:03.334651   61734 start.go:296] duration metric: took 135.459713ms for postStartSetup
	I0930 12:14:03.334705   61734 fix.go:56] duration metric: took 7.647756183s for fixHost
	I0930 12:14:03.334730   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:14:03.337529   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.337891   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:03.337934   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.338063   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:14:03.338248   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:14:03.338430   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:14:03.338575   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:14:03.338739   61734 main.go:141] libmachine: Using SSH client type: native
	I0930 12:14:03.338933   61734 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I0930 12:14:03.338946   61734 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0930 12:14:03.451392   61734 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727698443.429814841
	
	I0930 12:14:03.451417   61734 fix.go:216] guest clock: 1727698443.429814841
	I0930 12:14:03.451428   61734 fix.go:229] Guest: 2024-09-30 12:14:03.429814841 +0000 UTC Remote: 2024-09-30 12:14:03.33471074 +0000 UTC m=+14.682959155 (delta=95.104101ms)
	I0930 12:14:03.451492   61734 fix.go:200] guest clock delta is within tolerance: 95.104101ms
	I0930 12:14:03.451504   61734 start.go:83] releasing machines lock for "kubernetes-upgrade-001996", held for 7.76460066s
	I0930 12:14:03.451532   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:14:03.451795   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:14:03.454319   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.454712   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:03.454743   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.454906   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:14:03.455422   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:14:03.455588   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .DriverName
	I0930 12:14:03.455664   61734 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 12:14:03.455725   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:14:03.455826   61734 ssh_runner.go:195] Run: cat /version.json
	I0930 12:14:03.455863   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHHostname
	I0930 12:14:03.458374   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.458752   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:03.458775   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.458798   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.458974   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:14:03.459180   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:14:03.459247   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:03.459280   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:03.459317   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:14:03.459426   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHPort
	I0930 12:14:03.459482   61734 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:14:03.459554   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHKeyPath
	I0930 12:14:03.459679   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetSSHUsername
	I0930 12:14:03.459807   61734 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/kubernetes-upgrade-001996/id_rsa Username:docker}
	I0930 12:14:03.539481   61734 ssh_runner.go:195] Run: systemctl --version
	I0930 12:14:03.567352   61734 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0930 12:14:00.930993   60020 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.139207599s)
	I0930 12:14:00.931020   60020 crio.go:469] duration metric: took 2.139316899s to extract the tarball
	I0930 12:14:00.931027   60020 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0930 12:14:00.967822   60020 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:14:01.018075   60020 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 12:14:01.018097   60020 cache_images.go:84] Images are preloaded, skipping loading
	I0930 12:14:01.018104   60020 kubeadm.go:934] updating node { 192.168.83.178 8443 v1.31.1 crio true true} ...
	I0930 12:14:01.018187   60020 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-499540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.178
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-499540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 12:14:01.018253   60020 ssh_runner.go:195] Run: crio config
	I0930 12:14:01.064860   60020 cni.go:84] Creating CNI manager for ""
	I0930 12:14:01.064890   60020 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 12:14:01.064901   60020 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 12:14:01.064927   60020 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.178 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-499540 NodeName:embed-certs-499540 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.178"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.178 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 12:14:01.065064   60020 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.178
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-499540"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.178
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.178"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 12:14:01.065133   60020 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 12:14:01.083370   60020 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 12:14:01.083430   60020 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 12:14:01.093496   60020 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0930 12:14:01.112982   60020 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 12:14:01.131281   60020 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0930 12:14:01.150263   60020 ssh_runner.go:195] Run: grep 192.168.83.178	control-plane.minikube.internal$ /etc/hosts
	I0930 12:14:01.154117   60020 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.178	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 12:14:01.168672   60020 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:14:01.297363   60020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 12:14:01.314128   60020 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540 for IP: 192.168.83.178
	I0930 12:14:01.314155   60020 certs.go:194] generating shared ca certs ...
	I0930 12:14:01.314176   60020 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:14:01.314352   60020 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 12:14:01.314423   60020 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 12:14:01.314441   60020 certs.go:256] generating profile certs ...
	I0930 12:14:01.314534   60020 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/client.key
	I0930 12:14:01.314605   60020 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/apiserver.key.734ce782
	I0930 12:14:01.314665   60020 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/proxy-client.key
	I0930 12:14:01.314821   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 12:14:01.314866   60020 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 12:14:01.314880   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 12:14:01.314914   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 12:14:01.314948   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 12:14:01.314980   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 12:14:01.315030   60020 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:14:01.315920   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 12:14:01.349152   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 12:14:01.386555   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 12:14:01.415307   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 12:14:01.453215   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0930 12:14:01.479788   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 12:14:01.512006   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 12:14:01.541926   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/embed-certs-499540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 12:14:01.568089   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 12:14:01.595019   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 12:14:01.622108   60020 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 12:14:01.648495   60020 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 12:14:01.666890   60020 ssh_runner.go:195] Run: openssl version
	I0930 12:14:01.673202   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 12:14:01.685120   60020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 12:14:01.690246   60020 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 12:14:01.690314   60020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 12:14:01.698066   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 12:14:01.710062   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 12:14:01.721919   60020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 12:14:01.727253   60020 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 12:14:01.727322   60020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 12:14:01.733828   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 12:14:01.746064   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 12:14:01.758572   60020 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:01.763876   60020 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:01.763973   60020 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:01.770511   60020 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 12:14:01.782694   60020 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 12:14:01.788017   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 12:14:01.796766   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 12:14:01.803583   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 12:14:01.812265   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 12:14:01.819013   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 12:14:01.825975   60020 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 12:14:01.832761   60020 kubeadm.go:392] StartCluster: {Name:embed-certs-499540 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-499540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.178 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:14:01.832873   60020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 12:14:01.832942   60020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 12:14:01.875674   60020 cri.go:89] found id: ""
	I0930 12:14:01.875751   60020 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 12:14:01.888475   60020 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0930 12:14:01.888504   60020 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0930 12:14:01.888610   60020 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0930 12:14:01.902532   60020 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0930 12:14:01.903823   60020 kubeconfig.go:125] found "embed-certs-499540" server: "https://192.168.83.178:8443"
	I0930 12:14:01.906832   60020 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0930 12:14:01.920810   60020 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.83.178
	I0930 12:14:01.920848   60020 kubeadm.go:1160] stopping kube-system containers ...
	I0930 12:14:01.920861   60020 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0930 12:14:01.920915   60020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 12:14:01.967872   60020 cri.go:89] found id: ""
	I0930 12:14:01.967946   60020 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0930 12:14:01.985602   60020 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 12:14:01.996255   60020 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 12:14:01.996282   60020 kubeadm.go:157] found existing configuration files:
	
	I0930 12:14:01.996324   60020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 12:14:02.006092   60020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 12:14:02.006181   60020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 12:14:02.016445   60020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 12:14:02.026957   60020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 12:14:02.027055   60020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 12:14:02.039431   60020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 12:14:02.052500   60020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 12:14:02.052580   60020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 12:14:02.063800   60020 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 12:14:02.074028   60020 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 12:14:02.074115   60020 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 12:14:02.085374   60020 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 12:14:02.096172   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 12:14:02.224856   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 12:14:03.219628   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0930 12:14:03.442207   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0930 12:14:03.520284   60020 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0930 12:14:03.608646   60020 api_server.go:52] waiting for apiserver process to appear ...
	I0930 12:14:03.608734   60020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 12:14:04.109472   60020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 12:14:04.609044   60020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 12:14:04.626865   60020 api_server.go:72] duration metric: took 1.018216329s to wait for apiserver process to appear ...
	I0930 12:14:04.626914   60020 api_server.go:88] waiting for apiserver healthz status ...
	I0930 12:14:04.626943   60020 api_server.go:253] Checking apiserver healthz at https://192.168.83.178:8443/healthz ...
	I0930 12:14:04.627488   60020 api_server.go:269] stopped: https://192.168.83.178:8443/healthz: Get "https://192.168.83.178:8443/healthz": dial tcp 192.168.83.178:8443: connect: connection refused
	I0930 12:14:02.903382   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:14:04.906199   60223 pod_ready.go:103] pod "metrics-server-6867b74b74-h55hq" in "kube-system" namespace has status "Ready":"False"
	I0930 12:14:03.820355   61734 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0930 12:14:03.827351   61734 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 12:14:03.827419   61734 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 12:14:03.838718   61734 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 12:14:03.838746   61734 start.go:495] detecting cgroup driver to use...
	I0930 12:14:03.838817   61734 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0930 12:14:03.860503   61734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0930 12:14:03.877002   61734 docker.go:217] disabling cri-docker service (if available) ...
	I0930 12:14:03.877068   61734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0930 12:14:03.895877   61734 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0930 12:14:03.912811   61734 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0930 12:14:04.099690   61734 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0930 12:14:04.378582   61734 docker.go:233] disabling docker service ...
	I0930 12:14:04.378659   61734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0930 12:14:04.445570   61734 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0930 12:14:04.545016   61734 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0930 12:14:04.848594   61734 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0930 12:14:05.262995   61734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0930 12:14:05.302640   61734 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 12:14:05.368972   61734 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0930 12:14:05.369043   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.387103   61734 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0930 12:14:05.387181   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.406991   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.425541   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.450406   61734 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 12:14:05.474928   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.516728   61734 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.571434   61734 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0930 12:14:05.656733   61734 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 12:14:05.693258   61734 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 12:14:05.708950   61734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:14:05.949397   61734 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0930 12:14:06.748823   61734 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0930 12:14:06.748889   61734 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0930 12:14:06.753730   61734 start.go:563] Will wait 60s for crictl version
	I0930 12:14:06.753788   61734 ssh_runner.go:195] Run: which crictl
	I0930 12:14:06.758090   61734 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 12:14:06.791694   61734 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0930 12:14:06.791783   61734 ssh_runner.go:195] Run: crio --version
	I0930 12:14:06.821546   61734 ssh_runner.go:195] Run: crio --version
	I0930 12:14:06.854197   61734 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0930 12:14:06.855467   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) Calling .GetIP
	I0930 12:14:06.857858   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:06.858188   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:28:6c", ip: ""} in network mk-kubernetes-upgrade-001996: {Iface:virbr2 ExpiryTime:2024-09-30 13:13:24 +0000 UTC Type:0 Mac:52:54:00:68:28:6c Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:kubernetes-upgrade-001996 Clientid:01:52:54:00:68:28:6c}
	I0930 12:14:06.858239   61734 main.go:141] libmachine: (kubernetes-upgrade-001996) DBG | domain kubernetes-upgrade-001996 has defined IP address 192.168.50.128 and MAC address 52:54:00:68:28:6c in network mk-kubernetes-upgrade-001996
	I0930 12:14:06.858398   61734 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0930 12:14:06.862752   61734 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 12:14:06.862861   61734 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0930 12:14:06.862903   61734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:14:06.909549   61734 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 12:14:06.909572   61734 crio.go:433] Images already preloaded, skipping extraction
	I0930 12:14:06.909628   61734 ssh_runner.go:195] Run: sudo crictl images --output json
	I0930 12:14:06.944499   61734 crio.go:514] all images are preloaded for cri-o runtime.
	I0930 12:14:06.944521   61734 cache_images.go:84] Images are preloaded, skipping loading
	I0930 12:14:06.944528   61734 kubeadm.go:934] updating node { 192.168.50.128 8443 v1.31.1 crio true true} ...
	I0930 12:14:06.944614   61734 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-001996 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 12:14:06.944674   61734 ssh_runner.go:195] Run: crio config
	I0930 12:14:06.996126   61734 cni.go:84] Creating CNI manager for ""
	I0930 12:14:06.996156   61734 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 12:14:06.996167   61734 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 12:14:06.996195   61734 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-001996 NodeName:kubernetes-upgrade-001996 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 12:14:06.996322   61734 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-001996"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 12:14:06.996379   61734 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 12:14:07.007479   61734 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 12:14:07.007557   61734 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 12:14:07.024302   61734 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0930 12:14:07.089089   61734 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 12:14:07.111187   61734 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0930 12:14:07.148042   61734 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I0930 12:14:07.171425   61734 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 12:14:07.443579   61734 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 12:14:07.523177   61734 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996 for IP: 192.168.50.128
	I0930 12:14:07.523201   61734 certs.go:194] generating shared ca certs ...
	I0930 12:14:07.523221   61734 certs.go:226] acquiring lock for ca certs: {Name:mk6e8575843c22b815a3b102bdc4a520434e3b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 12:14:07.523431   61734 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key
	I0930 12:14:07.523493   61734 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key
	I0930 12:14:07.523507   61734 certs.go:256] generating profile certs ...
	I0930 12:14:07.523619   61734 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/client.key
	I0930 12:14:07.523689   61734 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key.d740bd78
	I0930 12:14:07.523752   61734 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key
	I0930 12:14:07.523905   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem (1338 bytes)
	W0930 12:14:07.523948   61734 certs.go:480] ignoring /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009_empty.pem, impossibly tiny 0 bytes
	I0930 12:14:07.523961   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 12:14:07.523995   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/ca.pem (1082 bytes)
	I0930 12:14:07.524028   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/cert.pem (1123 bytes)
	I0930 12:14:07.524058   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/certs/key.pem (1679 bytes)
	I0930 12:14:07.524113   61734 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem (1708 bytes)
	I0930 12:14:07.525004   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 12:14:07.743847   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 12:14:07.837824   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 12:14:07.881780   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 12:14:07.929110   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0930 12:14:07.975494   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 12:14:08.006629   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 12:14:08.099540   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/kubernetes-upgrade-001996/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0930 12:14:08.141240   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 12:14:08.197322   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/certs/11009.pem --> /usr/share/ca-certificates/11009.pem (1338 bytes)
	I0930 12:14:08.228898   61734 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/ssl/certs/110092.pem --> /usr/share/ca-certificates/110092.pem (1708 bytes)
	I0930 12:14:08.260152   61734 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 12:14:08.285124   61734 ssh_runner.go:195] Run: openssl version
	I0930 12:14:08.323590   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/110092.pem && ln -fs /usr/share/ca-certificates/110092.pem /etc/ssl/certs/110092.pem"
	I0930 12:14:08.341334   61734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/110092.pem
	I0930 12:14:08.346190   61734 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 30 11:01 /usr/share/ca-certificates/110092.pem
	I0930 12:14:08.346263   61734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/110092.pem
	I0930 12:14:08.352601   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/110092.pem /etc/ssl/certs/3ec20f2e.0"
	I0930 12:14:08.363569   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 12:14:08.375732   61734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:08.380891   61734 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:08.380954   61734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 12:14:08.387118   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 12:14:08.399315   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11009.pem && ln -fs /usr/share/ca-certificates/11009.pem /etc/ssl/certs/11009.pem"
	I0930 12:14:08.413145   61734 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11009.pem
	I0930 12:14:08.418739   61734 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 30 11:01 /usr/share/ca-certificates/11009.pem
	I0930 12:14:08.418804   61734 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11009.pem
	I0930 12:14:08.425285   61734 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11009.pem /etc/ssl/certs/51391683.0"
	I0930 12:14:08.436475   61734 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 12:14:08.441838   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0930 12:14:08.448021   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0930 12:14:08.454654   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0930 12:14:08.461026   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0930 12:14:08.467188   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0930 12:14:08.473098   61734 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0930 12:14:08.479368   61734 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-001996 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-001996 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 12:14:08.479443   61734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0930 12:14:08.479538   61734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0930 12:14:08.520256   61734 cri.go:89] found id: "ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d"
	I0930 12:14:08.520283   61734 cri.go:89] found id: "df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1"
	I0930 12:14:08.520299   61734 cri.go:89] found id: "915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868"
	I0930 12:14:08.520304   61734 cri.go:89] found id: "656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb"
	I0930 12:14:08.520308   61734 cri.go:89] found id: "4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0"
	I0930 12:14:08.520313   61734 cri.go:89] found id: "87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7"
	I0930 12:14:08.520317   61734 cri.go:89] found id: "0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e"
	I0930 12:14:08.520321   61734 cri.go:89] found id: "2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0"
	I0930 12:14:08.520325   61734 cri.go:89] found id: ""
	I0930 12:14:08.520384   61734 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.511417901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9cd9004-683c-4a5f-ad3d-ba891821cc15 name=/runtime.v1.RuntimeService/Version
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.513579611Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bdb9b171-25b8-42bf-abad-e7e7ed041652 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.514263786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727698458514235156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bdb9b171-25b8-42bf-abad-e7e7ed041652 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.515099834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0be2620f-8239-4d1e-9a57-d1c70d460ed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.515161146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0be2620f-8239-4d1e-9a57-d1c70d460ed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.515570866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fb1c0f598491a08c9f8fa5ddb31736293666d15c15c59072b11dd4d99e4e0f3,PodSandboxId:2a2d0237af8adeeb3db81abced635dedc51fbf28c6901ca308a3729bfdee0953,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727698455719380747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10243ef5dec0abc609ea81e23cd27d66e1e2c068a5c2017c1ddffb5f86ed51f7,PodSandboxId:1c1a37aaa4cf1e0cf853e051cf602dfdbac70ea742c6cf47e7d588da01516278,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455702534017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6485f5e05ddd21285c62316cf73e4494c46505d6e3d19511e7bc35c4d2396709,PodSandboxId:b16878981ef6759e46bd961f16b91f3576c0c8a0c64f30f65414e854b8ad90f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727698455709487434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cc4683764313e9b3ecc463df9e0d9b4bab640537318be2d3c919858cc6ba4,PodSandboxId:8e36b99beb753a23c48a046a0684044247a0ab2ce2a6136bc844d458d1c67d15,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455680508440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59
004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d988f735396d9061727690da9d7f0c3707b2f548a1f90be2e469237a630e014a,PodSandboxId:85c148d3000cfde5a78b79b03d25df33041abfcdcbf1cdc2f59f5eeac81393f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727698450885582714,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0ce1ec5eea2165975d7c1b2101980536b18a47defc44c581147ea4a366f86c,PodSandboxId:66e231d29781a497a0161b018360f3d43fa7ac609cc17c7c28750ea31523e899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727698450883351
285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8fcb0e6394193532133a0def59355932567bfaca51583ebcc94c5a38e32929,PodSandboxId:1a92272e6f359a79ab7f65b2a46c39b4b47c738a358b6850f9bbd9ec225c7dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7698450871456618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa7035bdb3db5e0104b0b2ee31dbbca807e31ac1620b15e067fecbe16b1fb64,PodSandboxId:34cfce8fb0ac4d6ffe130ef38413c84504570078226c081e8755f54bb75a9496,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172769845085181251
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868,PodSandboxId:1051e8f73cf5b039fd41f2924e70aeab9121982361f35313f9c5b441043ad764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727698444975983201,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1,PodSandboxId:1fa8c57117c29ec2eee22df7d31bc1a9f733b9cdd1da8fa902278aa28b5d26af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445600316447,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0,PodSandboxId:f5a3b0bc8d708c047c4fcfb908335bbe1174b334a264e3c29c567b00352eb477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727698444735369968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d,PodSandboxId:88a12ef365d10f30ee53ffadd34e5da1493c62ca39726ef0ca98b7dad3be51d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445715551043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7,PodSandboxId:a10d04d8359e9366bd250140d562c7170861909daf08f99e1ccf20d89bc53205,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727698444683035291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb,PodSandboxId:4312ab259ede5b5312dd81613dfcac005e222f32bcf3ef01768f18ba51acd5eb,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727698444774026425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e,PodSandboxId:e1618cd6c719109b91106c6750ab66aa1a5ff0e848bd4566ccb7890d449605ca,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727698444650833779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0,PodSandboxId:c3eb21d7a2edcb7021a1b8b9dad4b048f9cddd2e67d64037a420ccf2f326fcc3,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727698444493921101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0be2620f-8239-4d1e-9a57-d1c70d460ed3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.566644866Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=804a1021-7e62-4e0d-b4fd-91d368ba382e name=/runtime.v1.RuntimeService/Version
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.566725829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=804a1021-7e62-4e0d-b4fd-91d368ba382e name=/runtime.v1.RuntimeService/Version
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.567859553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60960066-098e-48d6-907c-2221877e581c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.568603822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727698458568571747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60960066-098e-48d6-907c-2221877e581c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.571241760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f75b1592-8a41-4b54-8588-4afdb139c092 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.571536812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f75b1592-8a41-4b54-8588-4afdb139c092 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.572477165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fb1c0f598491a08c9f8fa5ddb31736293666d15c15c59072b11dd4d99e4e0f3,PodSandboxId:2a2d0237af8adeeb3db81abced635dedc51fbf28c6901ca308a3729bfdee0953,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727698455719380747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10243ef5dec0abc609ea81e23cd27d66e1e2c068a5c2017c1ddffb5f86ed51f7,PodSandboxId:1c1a37aaa4cf1e0cf853e051cf602dfdbac70ea742c6cf47e7d588da01516278,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455702534017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6485f5e05ddd21285c62316cf73e4494c46505d6e3d19511e7bc35c4d2396709,PodSandboxId:b16878981ef6759e46bd961f16b91f3576c0c8a0c64f30f65414e854b8ad90f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727698455709487434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cc4683764313e9b3ecc463df9e0d9b4bab640537318be2d3c919858cc6ba4,PodSandboxId:8e36b99beb753a23c48a046a0684044247a0ab2ce2a6136bc844d458d1c67d15,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455680508440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59
004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d988f735396d9061727690da9d7f0c3707b2f548a1f90be2e469237a630e014a,PodSandboxId:85c148d3000cfde5a78b79b03d25df33041abfcdcbf1cdc2f59f5eeac81393f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727698450885582714,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0ce1ec5eea2165975d7c1b2101980536b18a47defc44c581147ea4a366f86c,PodSandboxId:66e231d29781a497a0161b018360f3d43fa7ac609cc17c7c28750ea31523e899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727698450883351
285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8fcb0e6394193532133a0def59355932567bfaca51583ebcc94c5a38e32929,PodSandboxId:1a92272e6f359a79ab7f65b2a46c39b4b47c738a358b6850f9bbd9ec225c7dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7698450871456618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa7035bdb3db5e0104b0b2ee31dbbca807e31ac1620b15e067fecbe16b1fb64,PodSandboxId:34cfce8fb0ac4d6ffe130ef38413c84504570078226c081e8755f54bb75a9496,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172769845085181251
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868,PodSandboxId:1051e8f73cf5b039fd41f2924e70aeab9121982361f35313f9c5b441043ad764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727698444975983201,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1,PodSandboxId:1fa8c57117c29ec2eee22df7d31bc1a9f733b9cdd1da8fa902278aa28b5d26af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445600316447,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0,PodSandboxId:f5a3b0bc8d708c047c4fcfb908335bbe1174b334a264e3c29c567b00352eb477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727698444735369968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d,PodSandboxId:88a12ef365d10f30ee53ffadd34e5da1493c62ca39726ef0ca98b7dad3be51d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445715551043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7,PodSandboxId:a10d04d8359e9366bd250140d562c7170861909daf08f99e1ccf20d89bc53205,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727698444683035291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb,PodSandboxId:4312ab259ede5b5312dd81613dfcac005e222f32bcf3ef01768f18ba51acd5eb,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727698444774026425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e,PodSandboxId:e1618cd6c719109b91106c6750ab66aa1a5ff0e848bd4566ccb7890d449605ca,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727698444650833779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0,PodSandboxId:c3eb21d7a2edcb7021a1b8b9dad4b048f9cddd2e67d64037a420ccf2f326fcc3,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727698444493921101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f75b1592-8a41-4b54-8588-4afdb139c092 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.623133227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=246921f1-04a0-4fc6-9e60-1da77b04542a name=/runtime.v1.RuntimeService/Version
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.623234792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=246921f1-04a0-4fc6-9e60-1da77b04542a name=/runtime.v1.RuntimeService/Version
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.624638272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=284a2d0c-db2c-4b2e-87c2-3ebd5196ebf7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.625003614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727698458624981948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=284a2d0c-db2c-4b2e-87c2-3ebd5196ebf7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.625615869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6500b40f-bd14-4fbf-8107-721903356aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.625697938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6500b40f-bd14-4fbf-8107-721903356aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.626020814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fb1c0f598491a08c9f8fa5ddb31736293666d15c15c59072b11dd4d99e4e0f3,PodSandboxId:2a2d0237af8adeeb3db81abced635dedc51fbf28c6901ca308a3729bfdee0953,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727698455719380747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10243ef5dec0abc609ea81e23cd27d66e1e2c068a5c2017c1ddffb5f86ed51f7,PodSandboxId:1c1a37aaa4cf1e0cf853e051cf602dfdbac70ea742c6cf47e7d588da01516278,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455702534017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6485f5e05ddd21285c62316cf73e4494c46505d6e3d19511e7bc35c4d2396709,PodSandboxId:b16878981ef6759e46bd961f16b91f3576c0c8a0c64f30f65414e854b8ad90f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727698455709487434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cc4683764313e9b3ecc463df9e0d9b4bab640537318be2d3c919858cc6ba4,PodSandboxId:8e36b99beb753a23c48a046a0684044247a0ab2ce2a6136bc844d458d1c67d15,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455680508440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59
004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d988f735396d9061727690da9d7f0c3707b2f548a1f90be2e469237a630e014a,PodSandboxId:85c148d3000cfde5a78b79b03d25df33041abfcdcbf1cdc2f59f5eeac81393f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727698450885582714,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0ce1ec5eea2165975d7c1b2101980536b18a47defc44c581147ea4a366f86c,PodSandboxId:66e231d29781a497a0161b018360f3d43fa7ac609cc17c7c28750ea31523e899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727698450883351
285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8fcb0e6394193532133a0def59355932567bfaca51583ebcc94c5a38e32929,PodSandboxId:1a92272e6f359a79ab7f65b2a46c39b4b47c738a358b6850f9bbd9ec225c7dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7698450871456618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa7035bdb3db5e0104b0b2ee31dbbca807e31ac1620b15e067fecbe16b1fb64,PodSandboxId:34cfce8fb0ac4d6ffe130ef38413c84504570078226c081e8755f54bb75a9496,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172769845085181251
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868,PodSandboxId:1051e8f73cf5b039fd41f2924e70aeab9121982361f35313f9c5b441043ad764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727698444975983201,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1,PodSandboxId:1fa8c57117c29ec2eee22df7d31bc1a9f733b9cdd1da8fa902278aa28b5d26af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445600316447,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0,PodSandboxId:f5a3b0bc8d708c047c4fcfb908335bbe1174b334a264e3c29c567b00352eb477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727698444735369968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d,PodSandboxId:88a12ef365d10f30ee53ffadd34e5da1493c62ca39726ef0ca98b7dad3be51d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445715551043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7,PodSandboxId:a10d04d8359e9366bd250140d562c7170861909daf08f99e1ccf20d89bc53205,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727698444683035291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb,PodSandboxId:4312ab259ede5b5312dd81613dfcac005e222f32bcf3ef01768f18ba51acd5eb,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727698444774026425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e,PodSandboxId:e1618cd6c719109b91106c6750ab66aa1a5ff0e848bd4566ccb7890d449605ca,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727698444650833779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0,PodSandboxId:c3eb21d7a2edcb7021a1b8b9dad4b048f9cddd2e67d64037a420ccf2f326fcc3,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727698444493921101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6500b40f-bd14-4fbf-8107-721903356aa9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.645921401Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6175f91d-a693-4388-a27e-379b395050ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.646324628Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8e36b99beb753a23c48a046a0684044247a0ab2ce2a6136bc844d458d1c67d15,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-khrgn,Uid:7ec20584-0301-4803-b327-59004a300503,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447700012693,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:52.368561879Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c1a37aaa4cf1e0cf853e051cf602dfdbac70ea742c6cf47e7d588da01516278,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zm8xq,Uid:c769f450-7786-4dac-803a-e45a85a5b7ac,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447675628652,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:52.340149709Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34cfce8fb0ac4d6ffe130ef38413c84504570078226c081e8755f54bb75a9496,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-001996,Uid:a46b992821ca1c67cd60b984cee81cc0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447330425335,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,tier: control-plane,},Annotations:map[string]string{kub
eadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.128:2379,kubernetes.io/config.hash: a46b992821ca1c67cd60b984cee81cc0,kubernetes.io/config.seen: 2024-09-30T12:13:41.455143837Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a2d0237af8adeeb3db81abced635dedc51fbf28c6901ca308a3729bfdee0953,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9xz8,Uid:d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447260878931,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:51.825033480Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b16878981ef6759e46bd961f16b91f3576c0c8a0c64f30f65414e854b8ad90f2,Metadata:&Pod
SandboxMetadata{Name:storage-provisioner,Uid:b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447175837376,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"
tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T12:13:51.366203986Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a92272e6f359a79ab7f65b2a46c39b4b47c738a358b6850f9bbd9ec225c7dc4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-001996,Uid:3b8cb06fea61442c5e45a989400f6f6e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447124940398,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.128:8443,kubernetes.io/config.hash: 3b8cb06fea61442c5e45a989400f6f6e,kubernetes.io/config.s
een: 2024-09-30T12:13:41.332862807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:66e231d29781a497a0161b018360f3d43fa7ac609cc17c7c28750ea31523e899,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-001996,Uid:5619f147629b4da22e94465661fd2c1c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447065414039,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5619f147629b4da22e94465661fd2c1c,kubernetes.io/config.seen: 2024-09-30T12:13:41.332867964Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:85c148d3000cfde5a78b79b03d25df33041abfcdcbf1cdc2f59f5eeac81393f9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-001996,Uid:7
f2be7e219fb48ac9c8ad299ebabe094,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727698447046129111,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7f2be7e219fb48ac9c8ad299ebabe094,kubernetes.io/config.seen: 2024-09-30T12:13:41.332869278Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4312ab259ede5b5312dd81613dfcac005e222f32bcf3ef01768f18ba51acd5eb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-001996,Uid:5619f147629b4da22e94465661fd2c1c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444122614038,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kuberne
tes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5619f147629b4da22e94465661fd2c1c,kubernetes.io/config.seen: 2024-09-30T12:13:41.332867964Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fa8c57117c29ec2eee22df7d31bc1a9f733b9cdd1da8fa902278aa28b5d26af,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-khrgn,Uid:7ec20584-0301-4803-b327-59004a300503,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444115375191,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:52.368561879Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10
51e8f73cf5b039fd41f2924e70aeab9121982361f35313f9c5b441043ad764,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9xz8,Uid:d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444114607935,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:51.825033480Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a10d04d8359e9366bd250140d562c7170861909daf08f99e1ccf20d89bc53205,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-001996,Uid:7f2be7e219fb48ac9c8ad299ebabe094,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444105011450,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.
name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7f2be7e219fb48ac9c8ad299ebabe094,kubernetes.io/config.seen: 2024-09-30T12:13:41.332869278Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:88a12ef365d10f30ee53ffadd34e5da1493c62ca39726ef0ca98b7dad3be51d4,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zm8xq,Uid:c769f450-7786-4dac-803a-e45a85a5b7ac,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444099867795,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-30T12:13:52.340149709Z,kubernetes.io/
config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5a3b0bc8d708c047c4fcfb908335bbe1174b334a264e3c29c567b00352eb477,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444096431028,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provi
sioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-30T12:13:51.366203986Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e1618cd6c719109b91106c6750ab66aa1a5ff0e848bd4566ccb7890d449605ca,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-001996,Uid:3b8cb06fea61442c5e45a989400f6f6e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444094304485,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-ap
iserver.advertise-address.endpoint: 192.168.50.128:8443,kubernetes.io/config.hash: 3b8cb06fea61442c5e45a989400f6f6e,kubernetes.io/config.seen: 2024-09-30T12:13:41.332862807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3eb21d7a2edcb7021a1b8b9dad4b048f9cddd2e67d64037a420ccf2f326fcc3,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-001996,Uid:a46b992821ca1c67cd60b984cee81cc0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727698444013254702,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.128:2379,kubernetes.io/config.hash: a46b992821ca1c67cd60b984cee81cc0,kubernetes.io/config.seen: 2024-09-30T12:13:41.455143837Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},},}" file="otel-collector/interceptors.go:74" id=6175f91d-a693-4388-a27e-379b395050ed name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.647709579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f64fb8b-4471-4970-8661-7d47248ead85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.647889170Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f64fb8b-4471-4970-8661-7d47248ead85 name=/runtime.v1.RuntimeService/ListContainers
	Sep 30 12:14:18 kubernetes-upgrade-001996 crio[3044]: time="2024-09-30 12:14:18.649878697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4fb1c0f598491a08c9f8fa5ddb31736293666d15c15c59072b11dd4d99e4e0f3,PodSandboxId:2a2d0237af8adeeb3db81abced635dedc51fbf28c6901ca308a3729bfdee0953,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727698455719380747,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10243ef5dec0abc609ea81e23cd27d66e1e2c068a5c2017c1ddffb5f86ed51f7,PodSandboxId:1c1a37aaa4cf1e0cf853e051cf602dfdbac70ea742c6cf47e7d588da01516278,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455702534017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6485f5e05ddd21285c62316cf73e4494c46505d6e3d19511e7bc35c4d2396709,PodSandboxId:b16878981ef6759e46bd961f16b91f3576c0c8a0c64f30f65414e854b8ad90f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727698455709487434,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50cc4683764313e9b3ecc463df9e0d9b4bab640537318be2d3c919858cc6ba4,PodSandboxId:8e36b99beb753a23c48a046a0684044247a0ab2ce2a6136bc844d458d1c67d15,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727698455680508440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59
004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d988f735396d9061727690da9d7f0c3707b2f548a1f90be2e469237a630e014a,PodSandboxId:85c148d3000cfde5a78b79b03d25df33041abfcdcbf1cdc2f59f5eeac81393f9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727698450885582714,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0ce1ec5eea2165975d7c1b2101980536b18a47defc44c581147ea4a366f86c,PodSandboxId:66e231d29781a497a0161b018360f3d43fa7ac609cc17c7c28750ea31523e899,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727698450883351
285,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed8fcb0e6394193532133a0def59355932567bfaca51583ebcc94c5a38e32929,PodSandboxId:1a92272e6f359a79ab7f65b2a46c39b4b47c738a358b6850f9bbd9ec225c7dc4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
7698450871456618,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa7035bdb3db5e0104b0b2ee31dbbca807e31ac1620b15e067fecbe16b1fb64,PodSandboxId:34cfce8fb0ac4d6ffe130ef38413c84504570078226c081e8755f54bb75a9496,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172769845085181251
5,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868,PodSandboxId:1051e8f73cf5b039fd41f2924e70aeab9121982361f35313f9c5b441043ad764,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727698444975983201,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9xz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e856b0-f7f9-4890-a13f-7b20a6e22aa8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1,PodSandboxId:1fa8c57117c29ec2eee22df7d31bc1a9f733b9cdd1da8fa902278aa28b5d26af,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445600316447,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7c65d6cfc9-khrgn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ec20584-0301-4803-b327-59004a300503,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0,PodSandboxId:f5a3b0bc8d708c047c4fcfb908335bbe1174b334a264e3c29c567b00352eb477,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runt
imeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727698444735369968,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d,PodSandboxId:88a12ef365d10f30ee53ffadd34e5da1493c62ca39726ef0ca98b7dad3be51d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727698445715551043,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zm8xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c769f450-7786-4dac-803a-e45a85a5b7ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7,PodSandboxId:a10d04d8359e9366bd250140d562c7170861909daf08f99e1ccf20d89bc53205,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727698444683035291,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f2be7e219fb48ac9c8ad299ebabe094,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb,PodSandboxId:4312ab259ede5b5312dd81613dfcac005e222f32bcf3ef01768f18ba51acd5eb,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727698444774026425,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5619f147629b4da22e94465661fd2c1c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e,PodSandboxId:e1618cd6c719109b91106c6750ab66aa1a5ff0e848bd4566ccb7890d449605ca,Metadata:&Con
tainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727698444650833779,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b8cb06fea61442c5e45a989400f6f6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0,PodSandboxId:c3eb21d7a2edcb7021a1b8b9dad4b048f9cddd2e67d64037a420ccf2f326fcc3,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727698444493921101,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-001996,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46b992821ca1c67cd60b984cee81cc0,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f64fb8b-4471-4970-8661-7d47248ead85 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4fb1c0f598491       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   2 seconds ago       Running             kube-proxy                2                   2a2d0237af8ad       kube-proxy-l9xz8
	6485f5e05ddd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   b16878981ef67       storage-provisioner
	10243ef5dec0a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   1c1a37aaa4cf1       coredns-7c65d6cfc9-zm8xq
	e50cc46837643       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   8e36b99beb753       coredns-7c65d6cfc9-khrgn
	d988f735396d9       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   85c148d3000cf       kube-scheduler-kubernetes-upgrade-001996
	9b0ce1ec5eea2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   66e231d29781a       kube-controller-manager-kubernetes-upgrade-001996
	ed8fcb0e63941       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   1a92272e6f359       kube-apiserver-kubernetes-upgrade-001996
	caa7035bdb3db       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   34cfce8fb0ac4       etcd-kubernetes-upgrade-001996
	ba32018f9200e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Exited              coredns                   1                   88a12ef365d10       coredns-7c65d6cfc9-zm8xq
	df3d5c002b274       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Exited              coredns                   1                   1fa8c57117c29       coredns-7c65d6cfc9-khrgn
	915b82944d744       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   13 seconds ago      Exited              kube-proxy                1                   1051e8f73cf5b       kube-proxy-l9xz8
	656a5b25b583c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   13 seconds ago      Exited              kube-controller-manager   1                   4312ab259ede5       kube-controller-manager-kubernetes-upgrade-001996
	4709b72a83e12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       1                   f5a3b0bc8d708       storage-provisioner
	87b757e3120dc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 seconds ago      Exited              kube-scheduler            1                   a10d04d8359e9       kube-scheduler-kubernetes-upgrade-001996
	0df5a4117689e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 seconds ago      Exited              kube-apiserver            1                   e1618cd6c7191       kube-apiserver-kubernetes-upgrade-001996
	2ccfe5497eb33       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 seconds ago      Exited              etcd                      1                   c3eb21d7a2edc       etcd-kubernetes-upgrade-001996
	
	
	==> coredns [10243ef5dec0abc609ea81e23cd27d66e1e2c068a5c2017c1ddffb5f86ed51f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d] <==
	
	
	==> coredns [df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1] <==
	
	
	==> coredns [e50cc4683764313e9b3ecc463df9e0d9b4bab640537318be2d3c919858cc6ba4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-001996
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-001996
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 12:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-001996
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 12:14:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 12:14:14 +0000   Mon, 30 Sep 2024 12:13:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 12:14:14 +0000   Mon, 30 Sep 2024 12:13:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 12:14:14 +0000   Mon, 30 Sep 2024 12:13:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 12:14:14 +0000   Mon, 30 Sep 2024 12:13:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.128
	  Hostname:    kubernetes-upgrade-001996
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 107cd4aa7c454eaa9b9e6a3bdf58046c
	  System UUID:                107cd4aa-7c45-4eaa-9b9e-6a3bdf58046c
	  Boot ID:                    35531b2a-454b-42bb-9d76-a1467dfbadea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-khrgn                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27s
	  kube-system                 coredns-7c65d6cfc9-zm8xq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27s
	  kube-system                 etcd-kubernetes-upgrade-001996                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kube-apiserver-kubernetes-upgrade-001996             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-001996    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-l9xz8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-kubernetes-upgrade-001996             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node kubernetes-upgrade-001996 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node kubernetes-upgrade-001996 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x7 over 38s)  kubelet          Node kubernetes-upgrade-001996 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                node-controller  Node kubernetes-upgrade-001996 event: Registered Node kubernetes-upgrade-001996 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-001996 event: Registered Node kubernetes-upgrade-001996 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.225919] systemd-fstab-generator[556]: Ignoring "noauto" option for root device
	[  +0.079166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080649] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.268015] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.183132] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.380620] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +4.677050] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +0.067611] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.430626] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +6.622084] systemd-fstab-generator[1202]: Ignoring "noauto" option for root device
	[  +0.085056] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.998996] kauditd_printk_skb: 58 callbacks suppressed
	[Sep30 12:14] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.266327] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +0.244947] systemd-fstab-generator[2291]: Ignoring "noauto" option for root device
	[  +0.502949] systemd-fstab-generator[2508]: Ignoring "noauto" option for root device
	[  +0.358319] systemd-fstab-generator[2659]: Ignoring "noauto" option for root device
	[  +0.717169] systemd-fstab-generator[2859]: Ignoring "noauto" option for root device
	[  +1.488019] systemd-fstab-generator[3281]: Ignoring "noauto" option for root device
	[  +2.839962] systemd-fstab-generator[3817]: Ignoring "noauto" option for root device
	[  +0.086904] kauditd_printk_skb: 302 callbacks suppressed
	[  +5.576840] kauditd_printk_skb: 40 callbacks suppressed
	[  +0.762273] systemd-fstab-generator[4351]: Ignoring "noauto" option for root device
	
	
	==> etcd [2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0] <==
	{"level":"info","ts":"2024-09-30T12:14:05.098835Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-30T12:14:05.152909Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"cd7de093209a1f5d","local-member-id":"b7d726258a4a2d44","commit-index":390}
	{"level":"info","ts":"2024-09-30T12:14:05.153342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-30T12:14:05.153370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became follower at term 2"}
	{"level":"info","ts":"2024-09-30T12:14:05.153381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b7d726258a4a2d44 [peers: [], term: 2, commit: 390, applied: 0, lastindex: 390, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-30T12:14:05.174213Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-30T12:14:05.241902Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":382}
	{"level":"info","ts":"2024-09-30T12:14:05.258158Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-30T12:14:05.280979Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b7d726258a4a2d44","timeout":"7s"}
	{"level":"info","ts":"2024-09-30T12:14:05.299098Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b7d726258a4a2d44"}
	{"level":"info","ts":"2024-09-30T12:14:05.299223Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"b7d726258a4a2d44","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-30T12:14:05.299586Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-30T12:14:05.299724Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T12:14:05.299774Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T12:14:05.299783Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T12:14:05.312437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 switched to configuration voters=(13247098771609365828)"}
	{"level":"info","ts":"2024-09-30T12:14:05.312607Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd7de093209a1f5d","local-member-id":"b7d726258a4a2d44","added-peer-id":"b7d726258a4a2d44","added-peer-peer-urls":["https://192.168.50.128:2380"]}
	{"level":"info","ts":"2024-09-30T12:14:05.312763Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd7de093209a1f5d","local-member-id":"b7d726258a4a2d44","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T12:14:05.312813Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T12:14:05.350511Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T12:14:05.353559Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-30T12:14:05.353759Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b7d726258a4a2d44","initial-advertise-peer-urls":["https://192.168.50.128:2380"],"listen-peer-urls":["https://192.168.50.128:2380"],"advertise-client-urls":["https://192.168.50.128:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.128:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T12:14:05.353781Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T12:14:05.353885Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.128:2380"}
	{"level":"info","ts":"2024-09-30T12:14:05.353891Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.128:2380"}
	
	
	==> etcd [caa7035bdb3db5e0104b0b2ee31dbbca807e31ac1620b15e067fecbe16b1fb64] <==
	{"level":"info","ts":"2024-09-30T12:14:11.280293Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.128:2380"}
	{"level":"info","ts":"2024-09-30T12:14:11.280560Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b7d726258a4a2d44","initial-advertise-peer-urls":["https://192.168.50.128:2380"],"listen-peer-urls":["https://192.168.50.128:2380"],"advertise-client-urls":["https://192.168.50.128:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.128:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-30T12:14:11.280577Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-30T12:14:11.282160Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T12:14:11.282481Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-30T12:14:11.282290Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd7de093209a1f5d","local-member-id":"b7d726258a4a2d44","added-peer-id":"b7d726258a4a2d44","added-peer-peer-urls":["https://192.168.50.128:2380"]}
	{"level":"info","ts":"2024-09-30T12:14:11.282782Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd7de093209a1f5d","local-member-id":"b7d726258a4a2d44","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T12:14:11.282911Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T12:14:11.282369Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.128:2380"}
	{"level":"info","ts":"2024-09-30T12:14:12.850610Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-30T12:14:12.850768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-30T12:14:12.850821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 received MsgPreVoteResp from b7d726258a4a2d44 at term 2"}
	{"level":"info","ts":"2024-09-30T12:14:12.850874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became candidate at term 3"}
	{"level":"info","ts":"2024-09-30T12:14:12.850905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 received MsgVoteResp from b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-09-30T12:14:12.850939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became leader at term 3"}
	{"level":"info","ts":"2024-09-30T12:14:12.851097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b7d726258a4a2d44 elected leader b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-09-30T12:14:12.856823Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b7d726258a4a2d44","local-member-attributes":"{Name:kubernetes-upgrade-001996 ClientURLs:[https://192.168.50.128:2379]}","request-path":"/0/members/b7d726258a4a2d44/attributes","cluster-id":"cd7de093209a1f5d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T12:14:12.856960Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T12:14:12.857415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T12:14:12.858447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T12:14:12.859729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T12:14:12.860812Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T12:14:12.862204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.128:2379"}
	{"level":"info","ts":"2024-09-30T12:14:12.860916Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T12:14:12.930493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 12:14:19 up 1 min,  0 users,  load average: 2.33, 0.61, 0.21
	Linux kubernetes-upgrade-001996 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e] <==
	I0930 12:14:05.627964       1 options.go:228] external host was not specified, using 192.168.50.128
	I0930 12:14:05.638325       1 server.go:142] Version: v1.31.1
	I0930 12:14:05.638376       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [ed8fcb0e6394193532133a0def59355932567bfaca51583ebcc94c5a38e32929] <==
	I0930 12:14:14.517663       1 shared_informer.go:320] Caches are synced for configmaps
	I0930 12:14:14.545125       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0930 12:14:14.545199       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0930 12:14:14.545207       1 policy_source.go:224] refreshing policies
	I0930 12:14:14.546867       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0930 12:14:14.549326       1 aggregator.go:171] initial CRD sync complete...
	I0930 12:14:14.549426       1 autoregister_controller.go:144] Starting autoregister controller
	I0930 12:14:14.549464       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0930 12:14:14.549497       1 cache.go:39] Caches are synced for autoregister controller
	E0930 12:14:14.555012       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0930 12:14:14.567869       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0930 12:14:14.606688       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0930 12:14:14.607316       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0930 12:14:14.609767       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0930 12:14:14.611205       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0930 12:14:14.611343       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0930 12:14:14.634719       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0930 12:14:15.422745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0930 12:14:15.989214       1 controller.go:615] quota admission added evaluator for: endpoints
	I0930 12:14:16.233380       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0930 12:14:16.250959       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0930 12:14:16.311266       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0930 12:14:16.402209       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0930 12:14:16.413980       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0930 12:14:18.060458       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb] <==
	
	
	==> kube-controller-manager [9b0ce1ec5eea2165975d7c1b2101980536b18a47defc44c581147ea4a366f86c] <==
	I0930 12:14:17.870184       1 shared_informer.go:320] Caches are synced for namespace
	I0930 12:14:17.870418       1 shared_informer.go:320] Caches are synced for stateful set
	I0930 12:14:17.870604       1 shared_informer.go:320] Caches are synced for crt configmap
	I0930 12:14:17.873687       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0930 12:14:17.877134       1 shared_informer.go:320] Caches are synced for job
	I0930 12:14:17.879413       1 shared_informer.go:320] Caches are synced for cronjob
	I0930 12:14:17.886285       1 shared_informer.go:320] Caches are synced for GC
	I0930 12:14:17.890233       1 shared_informer.go:320] Caches are synced for PVC protection
	I0930 12:14:17.891567       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.823389ms"
	I0930 12:14:17.892441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="104.713µs"
	I0930 12:14:17.892575       1 shared_informer.go:320] Caches are synced for ephemeral
	I0930 12:14:17.900182       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0930 12:14:17.906561       1 shared_informer.go:320] Caches are synced for deployment
	I0930 12:14:17.928602       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0930 12:14:17.944263       1 shared_informer.go:320] Caches are synced for disruption
	I0930 12:14:17.969523       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 12:14:18.001692       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 12:14:18.017718       1 shared_informer.go:320] Caches are synced for endpoint
	I0930 12:14:18.029292       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0930 12:14:18.085429       1 shared_informer.go:320] Caches are synced for attach detach
	I0930 12:14:18.513990       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 12:14:18.555440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0930 12:14:18.555467       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0930 12:14:19.204770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.96473ms"
	I0930 12:14:19.210888       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="112.029µs"
	
	
	==> kube-proxy [4fb1c0f598491a08c9f8fa5ddb31736293666d15c15c59072b11dd4d99e4e0f3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0930 12:14:16.194226       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0930 12:14:16.206413       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.128"]
	E0930 12:14:16.206478       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 12:14:16.261198       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0930 12:14:16.261327       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0930 12:14:16.261367       1 server_linux.go:169] "Using iptables Proxier"
	I0930 12:14:16.267998       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 12:14:16.268813       1 server.go:483] "Version info" version="v1.31.1"
	I0930 12:14:16.269208       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 12:14:16.270874       1 config.go:199] "Starting service config controller"
	I0930 12:14:16.272138       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 12:14:16.271262       1 config.go:105] "Starting endpoint slice config controller"
	I0930 12:14:16.272498       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 12:14:16.271929       1 config.go:328] "Starting node config controller"
	I0930 12:14:16.272510       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 12:14:16.372491       1 shared_informer.go:320] Caches are synced for service config
	I0930 12:14:16.372639       1 shared_informer.go:320] Caches are synced for node config
	I0930 12:14:16.372650       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868] <==
	
	
	==> kube-scheduler [87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7] <==
	
	
	==> kube-scheduler [d988f735396d9061727690da9d7f0c3707b2f548a1f90be2e469237a630e014a] <==
	I0930 12:14:12.575253       1 serving.go:386] Generated self-signed cert in-memory
	I0930 12:14:14.562906       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0930 12:14:14.566261       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 12:14:14.584905       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0930 12:14:14.585117       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0930 12:14:14.585438       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0930 12:14:14.585540       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 12:14:14.585573       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0930 12:14:14.585666       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0930 12:14:14.590953       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0930 12:14:14.594332       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0930 12:14:14.685512       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0930 12:14:14.685725       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0930 12:14:14.686288       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Sep 30 12:14:10 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:10.835673    3824 scope.go:117] "RemoveContainer" containerID="2ccfe5497eb33edec87d829cfc1c8e2c0bec6ce1a73fcfa9e4e64bfb42e95ef0"
	Sep 30 12:14:10 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:10.837607    3824 scope.go:117] "RemoveContainer" containerID="0df5a4117689e51dc6a9a7d88f35b406e7855dd758c0f3f01b16b590d51f685e"
	Sep 30 12:14:10 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:10.838133    3824 scope.go:117] "RemoveContainer" containerID="656a5b25b583c9cc59f8ab9e328c2c9e4e1d18026de49a691ae2ef7f1ed523cb"
	Sep 30 12:14:10 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:10.840396    3824 scope.go:117] "RemoveContainer" containerID="87b757e3120dcfd55b8bc24f1440cd8dc02384f937f2504fc39e1bdf9779daf7"
	Sep 30 12:14:10 kubernetes-upgrade-001996 kubelet[3824]: E0930 12:14:10.955871    3824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-001996?timeout=10s\": dial tcp 192.168.50.128:8443: connect: connection refused" interval="800ms"
	Sep 30 12:14:11 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:11.161342    3824 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-001996"
	Sep 30 12:14:11 kubernetes-upgrade-001996 kubelet[3824]: E0930 12:14:11.162206    3824 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.128:8443: connect: connection refused" node="kubernetes-upgrade-001996"
	Sep 30 12:14:11 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:11.964971    3824 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-001996"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:14.583345    3824 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-001996"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:14.583486    3824 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-001996"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:14.583512    3824 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:14.584642    3824 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: E0930 12:14:14.650168    3824 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-001996\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-001996"
	Sep 30 12:14:14 kubernetes-upgrade-001996 kubelet[3824]: E0930 12:14:14.968532    3824 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-001996\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-001996"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: E0930 12:14:15.147531    3824 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-001996\" already exists" pod="kube-system/etcd-kubernetes-upgrade-001996"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.341825    3824 apiserver.go:52] "Watching apiserver"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.444246    3824 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.509882    3824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3e856b0-f7f9-4890-a13f-7b20a6e22aa8-xtables-lock\") pod \"kube-proxy-l9xz8\" (UID: \"d3e856b0-f7f9-4890-a13f-7b20a6e22aa8\") " pod="kube-system/kube-proxy-l9xz8"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.510310    3824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2-tmp\") pod \"storage-provisioner\" (UID: \"b5fb78e5-87f7-40ea-91eb-dea1b1d8ada2\") " pod="kube-system/storage-provisioner"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.510658    3824 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3e856b0-f7f9-4890-a13f-7b20a6e22aa8-lib-modules\") pod \"kube-proxy-l9xz8\" (UID: \"d3e856b0-f7f9-4890-a13f-7b20a6e22aa8\") " pod="kube-system/kube-proxy-l9xz8"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.646529    3824 scope.go:117] "RemoveContainer" containerID="915b82944d744fa5b172040ba42b76c157d9241d03984b24fd9ecc55d398a868"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.646929    3824 scope.go:117] "RemoveContainer" containerID="ba32018f9200efc1dcf22f7fee27b62a1ba082ee2ed01fc20a31fd2d7eee6b8d"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.647502    3824 scope.go:117] "RemoveContainer" containerID="df3d5c002b2742ff4653cc7b33826038477cfc6f140999ca91a9b444e66ef7c1"
	Sep 30 12:14:15 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:15.648099    3824 scope.go:117] "RemoveContainer" containerID="4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0"
	Sep 30 12:14:19 kubernetes-upgrade-001996 kubelet[3824]: I0930 12:14:19.143446    3824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [4709b72a83e12bc3ea9d1d6c8f136bbd7fee371891771bf59eef5cbd60b350e0] <==
	
	
	==> storage-provisioner [6485f5e05ddd21285c62316cf73e4494c46505d6e3d19511e7bc35c4d2396709] <==
	I0930 12:14:15.940259       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 12:14:15.964406       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 12:14:15.964721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 12:14:16.022942       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 12:14:16.023221       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-001996_2645f9b5-8df9-4d94-8834-509668fc7dff!
	I0930 12:14:16.026290       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"76ba79b6-ffb7-410e-a82b-e495c09f24f6", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-001996_2645f9b5-8df9-4d94-8834-509668fc7dff became leader
	I0930 12:14:16.124274       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-001996_2645f9b5-8df9-4d94-8834-509668fc7dff!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0930 12:14:18.046619   62069 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19734-3842/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
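The `bufio.Scanner: token too long` failure in the stderr block above comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB (bufio.MaxScanTokenSize); lastStart.txt evidently contains a line longer than that, so the last-start log cannot be replayed here. A minimal sketch of reading such a file with an enlarged scanner buffer (the 1 MiB cap and the file path are illustrative, not minikube's actual settings):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default token limit is bufio.MaxScanTokenSize (64 KiB); raising the
	// cap avoids "bufio.Scanner: token too long" on very long log lines.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```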
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-001996 -n kubernetes-upgrade-001996
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-001996 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-001996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-001996
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-001996: (1.119634122s)
--- FAIL: TestKubernetesUpgrade (412.15s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.054s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
[the identical "connection refused" warning above was emitted 63 more times as the poll retried]
E0930 12:15:18.064110   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
[the identical "connection refused" warning above was emitted 86 more times as the poll retried]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
[the warning above was emitted another 73 times: every pod list against https://192.168.39.51:8443 kept failing with "connect: connection refused" while the apiserver was down]
E0930 12:20:18.063689   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.51:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.51:8443: connect: connection refused
[the same warning repeated 5 more times before the test binary hit its 2h0m0s timeout]
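The warnings above come from the test helper repeatedly listing dashboard pods while the node's apiserver is down. Purely as an illustration (this is not the minikube helpers_test.go code; the kubeconfig path and package layout are assumptions), a label-selector pod list with client-go looks like the sketch below, and it is this kind of List call that keeps returning "connection refused" against 192.168.39.51:8443:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the real test uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same namespace and label selector that appear in the warnings above.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is unreachable this is the "connection refused" error.
			fmt.Println("pod list failed:", err)
			return
		}
		fmt.Println("found", len(pods.Items), "dashboard pods")
	}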
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (19m19s)
		TestStartStop (20m43s)
		TestStartStop/group/default-k8s-diff-port (6m3s)
		TestStartStop/group/default-k8s-diff-port/serial (6m3s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1m58s)
		TestStartStop/group/embed-certs (15m52s)
		TestStartStop/group/embed-certs/serial (15m52s)
		TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (29s)
		TestStartStop/group/no-preload (16m55s)
		TestStartStop/group/no-preload/serial (16m55s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (1m45s)
		TestStartStop/group/old-k8s-version (20m43s)
		TestStartStop/group/old-k8s-version/serial (20m43s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6m10s)

                                                
                                                
goroutine 3459 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
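Goroutine 3459 is the testing package's timeout alarm: the 2h0m0s limit is the -timeout deadline given to go test, and when it fires the binary panics and prints this dump. As a hypothetical illustration only (not a test from this suite), a test can inspect that deadline itself:

	package example

	import (
		"testing"
		"time"
	)

	// TestRespectsDeadline logs when the test binary's -timeout alarm will fire.
	func TestRespectsDeadline(t *testing.T) {
		if dl, ok := t.Deadline(); ok {
			t.Logf("test binary will be killed at %s (in %s)", dl, time.Until(dl))
		}
	}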

                                                
                                                
goroutine 1 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000194b60, 0xc0006d9bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000012840, {0x4590140, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x464c680?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00078ac80)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00078ac80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0005ecf80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2708 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233700, 0xc000064310}, 0xc001da1750, 0xc001da1798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233700, 0xc000064310}, 0x1c?, 0xc001da1750, 0xc001da1798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233700?, 0xc000064310?}, 0xc001548000?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001da17d0?, 0x593ba4?, 0xc001e00780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 383 [chan receive, 71 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000686e80, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 335
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 369 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 368
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3279 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x32334f8, 0xc00003f6c0}, {0x3227490, 0xc0017bc6a0}, 0x1, 0x0, 0xc001299c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x32334f8?, 0xc0004549a0?}, 0x3b9aca00, 0xc001793e10?, 0x1, 0xc001793c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x32334f8, 0xc0004549a0}, 0xc0000fdd40, {0xc00005fa70, 0x12}, {0x25ae3cf, 0x14}, {0x25c17e0, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x32334f8, 0xc0004549a0}, 0xc0000fdd40, {0xc00005fa70, 0x12}, {0x2599819?, 0xc001d9e760?}, {0x559033?, 0x4b162f?}, {0xc00056e100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000fdd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000fdd40, 0xc0012e2600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2324
	/usr/local/go/src/testing/testing.go:1743 +0x390
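Goroutine 3279 shows why the warnings pile up: PodWait drives wait.PollUntilContextTimeout, which re-runs its condition (the pod list) roughly once per second until it succeeds or the timeout expires. A minimal sketch of that pattern under the assumption of the same apimachinery wait package (interval, timeout, and condition here are placeholders, not PodWait's real values):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		attempts := 0
		// Retry the condition every second for up to 10s; the real helper polls for
		// minutes, which is why the same WARNING line is logged dozens of times.
		err := wait.PollUntilContextTimeout(context.Background(), time.Second, 10*time.Second, true,
			func(ctx context.Context) (bool, error) {
				attempts++
				// Stand-in for the failing pod list: (false, nil) means "not yet, keep polling".
				return false, nil
			})
		fmt.Printf("gave up after %d attempts: %v\n", attempts, err)
	}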

                                                
                                                
goroutine 1717 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0007ae820, {0x258dd97?, 0x55917c?}, 0xc0013c0810)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0007ae820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0007ae820, 0x2f12e10)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2499 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000686a10, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0006d7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324cb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000686a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b5a010, {0x320f1c0, 0xc001ca42a0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b5a010, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 216 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7fd8027450a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0001c4600?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001c4600)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0001c4600)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0007ea880)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0007ea880)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc0007eda40, {0x3226e30, 0xc0007ea880})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc0007eda40)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0007afba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 213
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2709 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2708
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2607 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229fe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2603
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2707 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc001478010, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000095d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324cb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001478040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007fe630, {0x320f1c0, 0xc001684330}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007fe630, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2608
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 1782 [chan receive, 22 minutes]:
testing.(*T).Run(0xc00067e4e0, {0x258f0dc?, 0x0?}, 0xc00064ea00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00067e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00067e4e0, 0xc000928e00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1781
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 382 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229fe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 335
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1794 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015fc680)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0015fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc0015fc680, 0x2f12e60)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 598 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019bd080, 0xc001b28b60)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 541
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 368 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233700, 0xc000064310}, 0xc001317750, 0xc001351f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233700, 0xc000064310}, 0xa0?, 0xc001317750, 0xc001317798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233700?, 0xc000064310?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0008a2600?, 0xc0000650a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 367 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000686e50, 0x21)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001d6ed80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324cb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000686e80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000682310, {0x320f1c0, 0xc0007ce4b0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000682310, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 441 [chan send, 71 minutes]:
os/exec.(*Cmd).watchCtx(0xc0008a3200, 0xc000065c00)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 440
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 754 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0xc0019bcf00, 0xc0019e6e00)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 322
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 822 [select, 69 minutes]:
net/http.(*persistConn).writeLoop(0xc0016a10e0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 819
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 821 [select, 69 minutes]:
net/http.(*persistConn).readLoop(0xc0016a10e0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 819
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 3101 [IO wait]:
internal/poll.runtime_pollWait(0x7fd802744970, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001e01180?, 0xc000b54000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e01180, {0xc000b54000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc001e01180, {0xc000b54000?, 0x10?, 0xc0012c48a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000a90438, {0xc000b54000?, 0xc000b5405f?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001312198, {0xc000b54000?, 0x0?, 0xc001312198?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0007f49b8, {0x320f800, 0xc001312198})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0007f4708, {0x7fd800293040, 0xc001c7c300}, 0xc0012c4a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0007f4708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0007f4708, {0xc000b58000, 0x1000, 0xc001cf6a80?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001b16840, {0xc0018ee3c0, 0x9, 0x4560740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x320dd20, 0xc001b16840}, {0xc0018ee3c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0018ee3c0, 0x9, 0x47b965?}, {0x320dd20?, 0xc001b16840?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0018ee380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0012c4fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001932600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3100
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1945 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000fc340)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0000fc340, 0xc00064fe00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3029 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7fd8027456d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001cb6060?, 0xc000175a53?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001cb6060, {0xc000175a53, 0x5ad, 0x5ad})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000540318, {0xc000175a53?, 0x20d46a0?, 0x211?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0016fea80, {0x320db00, 0xc000a90330})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320dc80, 0xc0016fea80}, {0x320db00, 0xc000a90330}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000540318?, {0x320dc80, 0xc0016fea80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000540318, {0x320dc80, 0xc0016fea80})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320dc80, 0xc0016fea80}, {0x320db80, 0xc000540318}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0015cc000?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3028
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 2405 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b9700, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1803 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0001956c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0001956c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001956c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0001956c0, 0xc0001c4800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2316 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233700, 0xc000064310}, 0xc000097750, 0xc001536f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233700, 0xc000064310}, 0xf0?, 0xc000097750, 0xc000097798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233700?, 0xc000064310?}, 0xc0008a2300?, 0xc0012c6d10?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000977d0?, 0x593ba4?, 0xc001d10af0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2405
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2501 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2500
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1787 [chan receive, 16 minutes]:
testing.(*T).Run(0xc00067f6c0, {0x258f0dc?, 0x0?}, 0xc001d0c400)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00067f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00067f6c0, 0xc000929600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1781
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1802 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0001951e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0001951e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0001951e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0001951e0, 0xc0001c4780)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2404 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229fe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1788 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00067f860, {0x25b305f?, 0x6977205d31333a6f?}, 0xc001d0c200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00067f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00067f860, 0xc00064ea00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1782
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1784 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00067e820, {0x258f0dc?, 0x0?}, 0xc001d0c280)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00067e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00067e820, 0xc000929380)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1781
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1783 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00067e680)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00067e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00067e680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00067e680, 0xc000928f40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1781
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2324 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001548340, {0x25b305f?, 0xc000096d70?}, 0xc0012e2600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001548340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001548340, 0xc001d0c400)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1787
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3028 [syscall, 2 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x11, 0xc0000afb30, 0x4, 0xc000069050, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc000546b70?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001764000)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001764000)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000fcd00, 0xc001764000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x32334f8, 0xc0001920e0}, 0xc0000fcd00, {0xc000580540, 0x1c}, {0x0?, 0xc0013b3f60?}, {0x559033?, 0x4b162f?}, {0xc0001cf300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0000fcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0000fcd00, 0xc0015cc000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2464
	/usr/local/go/src/testing/testing.go:1743 +0x390
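Goroutine 3028 is the SecondStart subtest blocked in os/exec: integration.Run starts the minikube binary and then sits in Cmd.Wait (pidfdWait) until the child exits, while goroutines 3029/3030 copy its stdout and stderr. A stripped-down sketch of that pattern (the command arguments and profile name are illustrative, not the test's exact invocation):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical invocation; the real test builds its args from the profile under test.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "example-profile")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout // assigning buffers is what spawns the pipe-copy goroutines seen above
		cmd.Stderr = &stderr
		err := cmd.Run() // Start + Wait; Wait blocks until the child process exits
		fmt.Printf("exit err: %v (stdout %d bytes, stderr %d bytes)\n", err, stdout.Len(), stderr.Len())
	}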

                                                
                                                
goroutine 3030 [IO wait]:
internal/poll.runtime_pollWait(0x7fd802744c88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001cb6120?, 0xc0013e651d?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001cb6120, {0xc0013e651d, 0x3ae3, 0x3ae3})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000540340, {0xc0013e651d?, 0x411b30?, 0x3e2b?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0016feab0, {0x320db00, 0xc0017080b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320dc80, 0xc0016feab0}, {0x320db00, 0xc0017080b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000540340?, {0x320dc80, 0xc0016feab0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000540340, {0x320dc80, 0xc0016feab0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320dc80, 0xc0016feab0}, {0x320db80, 0xc000540340}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00163e380?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3028
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 2263 [chan receive, 2 minutes]:
testing.(*T).Run(0xc001548680, {0x25b305f?, 0xc000518d70?}, 0xc001e00300)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001548680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001548680, 0xc001d0c300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1785
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3069 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x32334f8, 0xc0004776c0}, {0x3227490, 0xc00198bd20}, 0x1, 0x0, 0xc00129dc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x32334f8?, 0xc00048a700?}, 0x3b9aca00, 0xc00129de10?, 0x1, 0xc00129dc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x32334f8, 0xc00048a700}, 0xc001548000, {0xc00005f770, 0x11}, {0x25ae3cf, 0x14}, {0x25c17e0, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x32334f8, 0xc00048a700}, 0xc001548000, {0xc00005f770, 0x11}, {0x2597c5f?, 0xc001d9f760?}, {0x559033?, 0x4b162f?}, {0xc00056e000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001548000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001548000, 0xc001e00300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2263
	/usr/local/go/src/testing/testing.go:1743 +0x390
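
Goroutine 3069 shows the wait path: PodWait (helpers_test.go) polls the cluster through apimachinery's wait.PollUntilContextTimeout with a 1-second interval (0x3b9aca00 ns) and, judging from PodWait's final argument (0x7dba821800 ns), a 9-minute timeout. A minimal sketch of that polling pattern, with a stand-in condition rather than minikube's pod check, is:

// Minimal sketch of the polling pattern in goroutine 3069: poll a condition
// once per second until it reports done, returns an error, or the timeout
// elapses. The condition body is a stand-in for minikube's pod check.
package integration_sketch

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForCondition(ctx context.Context, check func(context.Context) (bool, error)) error {
	// interval = 1s and immediate = true match the values visible in the
	// stack trace; the 9-minute timeout matches PodWait's final argument.
	return wait.PollUntilContextTimeout(ctx, time.Second, 9*time.Minute, true, check)
}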

                                                
                                                
goroutine 3338 [IO wait]:
internal/poll.runtime_pollWait(0x7fd8027451b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0012e3400?, 0xc00165c000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0012e3400, {0xc00165c000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0012e3400, {0xc00165c000?, 0x9d68b2?, 0xc001d739a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000b78828, {0xc00165c000?, 0xc00187f540?, 0xc00165c05f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0013121f8, {0xc00165c000?, 0x0?, 0xc0013121f8?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0000382b8, {0x320f800, 0xc0013121f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000038008, {0x320ece0, 0xc000b78828}, 0xc001d73a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000038008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc000038008, {0xc00166d000, 0x1000, 0xc001cf6a80?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc00165bb00, {0xc0001cb700, 0x9, 0x4560740?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x320dd20, 0xc00165bb00}, {0xc0001cb700, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0001cb700, 0x9, 0x47b965?}, {0x320dd20?, 0xc00165bb00?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001cb6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001d73fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001e05080)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3337
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1801 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0007af520)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007af520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007af520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0007af520, 0xc0001c4700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2317 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2316
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1944 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000fc000)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0000fc000, 0xc00064fd80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1711 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0015fc1a0, {0x258dd97?, 0x559033?}, 0x2f13050)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0015fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0015fc1a0, 0x2f12e58)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1843 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0015fc340, 0xc0013c0810)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1717
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2464 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0007afd40, {0x259982f?, 0xc00131d570?}, 0xc0015cc000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007afd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0007afd40, 0xc001d0c280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1784
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2457 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x32334f8, 0xc0004770a0}, {0x3227490, 0xc00085cf80}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x32334f8?, 0xc000680770?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x32334f8, 0xc000680770}, 0xc0007afa00, {0xc001302e88, 0x16}, {0x25ae3cf, 0x14}, {0x25c17e0, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x32334f8, 0xc000680770}, 0xc0007afa00, {0xc001302e88, 0x16}, {0x25a1d81?, 0xc001da1f60?}, {0x559033?, 0x4b162f?}, {0xc001e04600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0007afa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0007afa00, 0xc001d0c200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1788
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1720 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0007af380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007af380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc0007af380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc0007af380, 0x2f12e28)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1781 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc00067e1a0, 0x2f13050)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1711
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1785 [chan receive, 18 minutes]:
testing.(*T).Run(0xc00067f380, {0x258f0dc?, 0x0?}, 0xc001d0c300)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00067f380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc00067f380, 0xc000929400)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1781
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1844 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0015fd040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015fd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0015fd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0015fd040, 0xc001e00180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2315 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc0008b96d0, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001283d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324cb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b9700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b5ac00, {0x320f1c0, 0xc001d7e1e0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b5ac00, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2405
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
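
Goroutine 2315 here, together with the cert_rotation and workqueue goroutines further down (2458, 2459, 2500, 2608), belongs to client-go's certificate-rotation machinery: dynamicClientCert.Run starts a worker that drains a workqueue inside wait.BackoffUntil/Until until its stop channel closes. A rough sketch of that worker-loop shape (not client-go's actual implementation) is:

// Rough sketch of the worker-loop shape in goroutine 2315: run a function
// every second until stopCh closes. client-go's real worker pops items from
// a typed workqueue; the body here is only a placeholder.
package integration_sketch

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func runWorker(stopCh <-chan struct{}) {
	wait.Until(func() {
		fmt.Println("process one queued item") // placeholder work
	}, time.Second, stopCh)
}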

                                                
                                                
goroutine 1800 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0006a2690)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0007af1e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0007af1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0007af1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0007af1e0, 0xc0001c4680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1843
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2459 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000686a40, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2458 [select, 2 minutes]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229fe0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2457
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2500 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233700, 0xc000064310}, 0xc0013dbf50, 0xc0013dbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233700, 0xc000064310}, 0xce?, 0xc0013dbf50, 0xc0013dbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233700?, 0xc000064310?}, 0xc0000fc4e0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013187d0?, 0x593ba4?, 0xc00064fe80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2459
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3031 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001764000, 0xc0013100e0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3028
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2608 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001478040, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2603
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569
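
Most of the remaining long-blocked goroutines in this dump (e.g. 1720, 1783, 1800, 1801, 1844, 1944) are parked in testing.(*testContext).waitParallel: subtests that called t.Parallel() and are queued behind the -test.parallel limit while the long-running StartStop and NetworkPlugins subtests hold the available slots. The idiom that produces those stacks is the ordinary parallel-subtest pattern, sketched below with illustrative names:

// Illustrative parallel-subtest idiom behind the waitParallel stacks: each
// subtest calls t.Parallel() and queues until a parallel slot frees up.
package integration_sketch

import "testing"

func TestGroups(t *testing.T) {
	for _, name := range []string{"auto", "kindnet", "calico"} {
		t.Run(name, func(t *testing.T) {
			t.Parallel() // parks in waitParallel until a slot is available
			// ... start a cluster and run the per-plugin checks ...
		})
	}
}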

                                                
                                    

Test pass (153/202)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.72
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 84.12
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 70.14
29 TestCertExpiration 247.72
31 TestForceSystemdFlag 68.65
32 TestForceSystemdEnv 53.02
34 TestKVMDriverInstallOrUpdate 3.05
38 TestErrorSpam/setup 40.92
39 TestErrorSpam/start 0.34
40 TestErrorSpam/status 0.74
41 TestErrorSpam/pause 1.58
42 TestErrorSpam/unpause 1.88
43 TestErrorSpam/stop 5.6
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 85.1
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 29.03
50 TestFunctional/serial/KubeContext 0.05
51 TestFunctional/serial/KubectlGetPods 0.09
54 TestFunctional/serial/CacheCmd/cache/add_remote 4.05
55 TestFunctional/serial/CacheCmd/cache/add_local 1.51
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.77
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.1
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 395.73
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.19
66 TestFunctional/serial/LogsFileCmd 1.21
67 TestFunctional/serial/InvalidService 4.2
69 TestFunctional/parallel/ConfigCmd 0.33
70 TestFunctional/parallel/DashboardCmd 10.78
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.17
73 TestFunctional/parallel/StatusCmd 0.9
77 TestFunctional/parallel/ServiceCmdConnect 23.47
78 TestFunctional/parallel/AddonsCmd 0.15
79 TestFunctional/parallel/PersistentVolumeClaim 31.84
81 TestFunctional/parallel/SSHCmd 0.44
82 TestFunctional/parallel/CpCmd 1.31
83 TestFunctional/parallel/MySQL 28.67
84 TestFunctional/parallel/FileSync 0.21
85 TestFunctional/parallel/CertSync 1.32
89 TestFunctional/parallel/NodeLabels 0.07
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
93 TestFunctional/parallel/License 0.17
94 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
95 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
96 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
97 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
98 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
99 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
100 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
101 TestFunctional/parallel/ImageCommands/ImageBuild 3.21
102 TestFunctional/parallel/ImageCommands/Setup 0.98
103 TestFunctional/parallel/Version/short 0.05
104 TestFunctional/parallel/Version/components 0.57
105 TestFunctional/parallel/MountCmd/any-port 21.43
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.42
107 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.25
108 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.02
109 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.78
110 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
111 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.03
112 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.56
113 TestFunctional/parallel/ServiceCmd/DeployApp 7.39
114 TestFunctional/parallel/MountCmd/specific-port 1.6
115 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
117 TestFunctional/parallel/ServiceCmd/List 0.54
118 TestFunctional/parallel/ProfileCmd/profile_list 0.36
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.49
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
131 TestFunctional/parallel/ServiceCmd/Format 0.29
132 TestFunctional/parallel/ServiceCmd/URL 0.29
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.02
139 TestMultiControlPlane/serial/StartCluster 201.46
140 TestMultiControlPlane/serial/DeployApp 4.98
141 TestMultiControlPlane/serial/PingHostFromPods 1.21
142 TestMultiControlPlane/serial/AddWorkerNode 57.06
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
145 TestMultiControlPlane/serial/CopyFile 13.06
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.97
161 TestJSONOutput/start/Command 85.52
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.74
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.65
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.35
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.19
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 92.29
193 TestMountStart/serial/StartWithMountFirst 24.58
194 TestMountStart/serial/VerifyMountFirst 0.37
195 TestMountStart/serial/StartWithMountSecond 28.27
196 TestMountStart/serial/VerifyMountSecond 0.37
197 TestMountStart/serial/DeleteFirst 0.7
198 TestMountStart/serial/VerifyMountPostDelete 0.36
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 23.77
201 TestMountStart/serial/VerifyMountPostStop 0.37
204 TestMultiNode/serial/FreshStart2Nodes 113.43
205 TestMultiNode/serial/DeployApp2Nodes 4.91
206 TestMultiNode/serial/PingHostFrom2Pods 0.77
207 TestMultiNode/serial/AddNode 51.07
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.57
210 TestMultiNode/serial/CopyFile 7.16
211 TestMultiNode/serial/StopNode 2.32
212 TestMultiNode/serial/StartAfterStop 38.71
214 TestMultiNode/serial/DeleteNode 2.24
216 TestMultiNode/serial/RestartMultiNode 203.55
217 TestMultiNode/serial/ValidateNameConflict 46.19
224 TestScheduledStopUnix 114.21
228 TestRunningBinaryUpgrade 227.77
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
243 TestNoKubernetes/serial/StartWithK8s 100.3
255 TestNoKubernetes/serial/StartWithStopK8s 42.6
256 TestNoKubernetes/serial/Start 28.28
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
258 TestNoKubernetes/serial/ProfileList 1.46
259 TestNoKubernetes/serial/Stop 1.41
260 TestNoKubernetes/serial/StartNoArgs 44.53
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
TestDownloadOnly/v1.20.0/json-events (10.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-064697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-064697 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.319135253s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 10:20:34.785765   11009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0930 10:20:34.785874   11009 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-064697
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-064697: exit status 85 (56.113429ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-064697 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |          |
	|         | -p download-only-064697        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:24.506549   11021 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:24.506653   11021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:24.506661   11021 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:24.506665   11021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:24.506847   11021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	W0930 10:20:24.506966   11021 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-3842/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-3842/.minikube/config/config.json: no such file or directory
	I0930 10:20:24.507545   11021 out.go:352] Setting JSON to true
	I0930 10:20:24.508384   11021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":171,"bootTime":1727691453,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:24.508495   11021 start.go:139] virtualization: kvm guest
	I0930 10:20:24.510795   11021 out.go:97] [download-only-064697] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 10:20:24.510921   11021 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:20:24.510965   11021 notify.go:220] Checking for updates...
	I0930 10:20:24.512419   11021 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:24.514049   11021 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:24.515554   11021 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 10:20:24.516874   11021 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 10:20:24.518178   11021 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0930 10:20:24.520613   11021 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:20:24.520849   11021 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:24.625367   11021 out.go:97] Using the kvm2 driver based on user configuration
	I0930 10:20:24.625396   11021 start.go:297] selected driver: kvm2
	I0930 10:20:24.625404   11021 start.go:901] validating driver "kvm2" against <nil>
	I0930 10:20:24.625877   11021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:24.626017   11021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19734-3842/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0930 10:20:24.641035   11021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0930 10:20:24.641082   11021 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:24.641591   11021 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0930 10:20:24.641803   11021 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:20:24.641831   11021 cni.go:84] Creating CNI manager for ""
	I0930 10:20:24.641875   11021 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0930 10:20:24.641886   11021 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 10:20:24.641950   11021 start.go:340] cluster config:
	{Name:download-only-064697 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-064697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:20:24.642110   11021 iso.go:125] acquiring lock: {Name:mk95ca253939a7801a8ba4db8c15a1a3ab4169e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:24.643896   11021 out.go:97] Downloading VM boot image ...
	I0930 10:20:24.643934   11021 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0930 10:20:28.319680   11021 out.go:97] Starting "download-only-064697" primary control-plane node in "download-only-064697" cluster
	I0930 10:20:28.319701   11021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:20:28.342572   11021 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 10:20:28.342603   11021 cache.go:56] Caching tarball of preloaded images
	I0930 10:20:28.342768   11021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:20:28.344584   11021 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 10:20:28.344610   11021 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 10:20:28.375145   11021 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0930 10:20:33.332276   11021 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 10:20:33.332409   11021 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0930 10:20:34.244484   11021 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0930 10:20:34.244879   11021 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/download-only-064697/config.json ...
	I0930 10:20:34.244918   11021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/download-only-064697/config.json: {Name:mk14a3ec6381b3bebeb3f1cfa82db038994feeb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:20:34.245206   11021 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0930 10:20:34.245451   11021 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19734-3842/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-064697 host does not exist
	  To start a cluster, run: "minikube start -p download-only-064697"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
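
The "Last Start" log above walks through the download-only flow: fetch the VM boot image and the preloaded image tarball (each URL carries a checksum hint, a .sha256 file for the ISO and kubectl, an md5 digest for the preload), verify the checksum, then write the profile config. A generic sketch of the verification step in plain Go, not minikube's download package, is:

// Generic sketch of verifying a downloaded file against a SHA-256 digest,
// the step logged above as "verifying checksum of ...tar.lz4". The path and
// expected digest are illustrative.
package integration_sketch

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}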

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-064697
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (3.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-301930 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-301930 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.715201094s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.72s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 10:20:38.823899   11009 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0930 10:20:38.823951   11009 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-3842/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-301930
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-301930: exit status 85 (57.222039ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-064697 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-064697        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-064697        | download-only-064697 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-301930 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-301930        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:35.147152   11229 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:35.147296   11229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:35.147307   11229 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:35.147312   11229 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:35.147489   11229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 10:20:35.148037   11229 out.go:352] Setting JSON to true
	I0930 10:20:35.148974   11229 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":182,"bootTime":1727691453,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:35.149092   11229 start.go:139] virtualization: kvm guest
	I0930 10:20:35.151332   11229 out.go:97] [download-only-301930] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 10:20:35.151469   11229 notify.go:220] Checking for updates...
	I0930 10:20:35.152937   11229 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:35.154301   11229 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:35.155671   11229 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 10:20:35.157085   11229 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 10:20:35.158416   11229 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-301930 host does not exist
	  To start a cluster, run: "minikube start -p download-only-301930"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-301930
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0930 10:20:39.387500   11009 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-950283 --alsologtostderr --binary-mirror http://127.0.0.1:41835 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-950283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-950283
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (84.12s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-641910 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-641910 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.069515628s)
helpers_test.go:175: Cleaning up "offline-crio-641910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-641910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-641910: (1.046194762s)
--- PASS: TestOffline (84.12s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-967811
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-967811: exit status 85 (51.674411ms)

                                                
                                                
-- stdout --
	* Profile "addons-967811" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-967811"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-967811
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-967811: exit status 85 (50.226648ms)

                                                
                                                
-- stdout --
	* Profile "addons-967811" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-967811"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (70.14s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-505847 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-505847 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m8.848590072s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-505847 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-505847 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-505847 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-505847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-505847
--- PASS: TestCertOptions (70.14s)

                                                
                                    
TestCertExpiration (247.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836238 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836238 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.355847123s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836238 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836238 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (23.383960784s)
helpers_test.go:175: Cleaning up "cert-expiration-836238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-836238
--- PASS: TestCertExpiration (247.72s)

                                                
                                    
TestForceSystemdFlag (68.65s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-687675 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-687675 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.007783155s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-687675 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-687675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-687675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-687675: (1.436844508s)
--- PASS: TestForceSystemdFlag (68.65s)
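The `cat /etc/crio/crio.conf.d/02-crio.conf` step above is how the test confirms that --force-systemd reached the container runtime. A hand-run sketch; the cgroup_manager key is an assumption based on CRI-O's configuration format, not something shown in this log:

    minikube start -p force-systemd-flag-687675 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    # Expect the drop-in to select the systemd cgroup manager (assumed key: cgroup_manager = "systemd").
    minikube -p force-systemd-flag-687675 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    minikube delete -p force-systemd-flag-687675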

                                                
                                    
TestForceSystemdEnv (53.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-498401 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-498401 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.105571619s)
helpers_test.go:175: Cleaning up "force-systemd-env-498401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-498401
--- PASS: TestForceSystemdEnv (53.02s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0930 12:01:17.379955   11009 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 12:01:17.380103   11009 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0930 12:01:17.409384   11009 install.go:62] docker-machine-driver-kvm2: exit status 1
W0930 12:01:17.409747   11009 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 12:01:17.409806   11009 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3104114793/001/docker-machine-driver-kvm2
I0930 12:01:17.680006   11009 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3104114793/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc000014d70 gz:0xc000014d78 tar:0xc000014ce0 tar.bz2:0xc000014d00 tar.gz:0xc000014d20 tar.xz:0xc000014d40 tar.zst:0xc000014d50 tbz2:0xc000014d00 tgz:0xc000014d20 txz:0xc000014d40 tzst:0xc000014d50 xz:0xc000014da0 zip:0xc000014dd0 zst:0xc000014da8] Getters:map[file:0xc00077d410 http:0xc00088cc30 https:0xc00088cc80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 12:01:17.680066   11009 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3104114793/001/docker-machine-driver-kvm2
I0930 12:01:18.990306   11009 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0930 12:01:18.990403   11009 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0930 12:01:19.019270   11009 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0930 12:01:19.019303   11009 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0930 12:01:19.019365   11009 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0930 12:01:19.019391   11009 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3104114793/002/docker-machine-driver-kvm2
I0930 12:01:19.183209   11009 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3104114793/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc000014d70 gz:0xc000014d78 tar:0xc000014ce0 tar.bz2:0xc000014d00 tar.gz:0xc000014d20 tar.xz:0xc000014d40 tar.zst:0xc000014d50 tbz2:0xc000014d00 tgz:0xc000014d20 txz:0xc000014d40 tzst:0xc000014d50 xz:0xc000014da0 zip:0xc000014dd0 zst:0xc000014da8] Getters:map[file:0xc000515bb0 http:0xc00013abe0 https:0xc00013ac30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0930 12:01:19.183273   11009 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3104114793/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.05s)
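The two 404 messages above are expected: the installer first tries the architecture-suffixed release asset and, when that (or its checksum) is missing, falls back to the unsuffixed name. A rough shell equivalent of that fallback, using the URLs from the log (curl here is illustrative, not minikube's actual downloader, and checksum verification is omitted):

    VER=v1.3.0
    BASE=https://github.com/kubernetes/minikube/releases/download/$VER
    # Prefer the arch-specific binary; fall back to the common name on failure.
    curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2-amd64" \
      || curl -fLo docker-machine-driver-kvm2 "$BASE/docker-machine-driver-kvm2"
    chmod +x docker-machine-driver-kvm2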

                                                
                                    
TestErrorSpam/setup (40.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-592144 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-592144 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-592144 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-592144 --driver=kvm2  --container-runtime=crio: (40.921147244s)
--- PASS: TestErrorSpam/setup (40.92s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.88s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

                                                
                                    
TestErrorSpam/stop (5.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop: (2.39670063s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop: (1.310137362s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-592144 --log_dir /tmp/nospam-592144 stop: (1.895059099s)
--- PASS: TestErrorSpam/stop (5.60s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-3842/.minikube/files/etc/test/nested/copy/11009/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (85.1s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-020284 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.100772549s)
--- PASS: TestFunctional/serial/StartWithProxy (85.10s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0930 11:02:57.919036   11009 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-020284 --alsologtostderr -v=8: (29.030105286s)
functional_test.go:663: soft start took 29.030789443s for "functional-020284" cluster.
I0930 11:03:26.949483   11009 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.03s)
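"Soft start" means running `minikube start` against a profile whose VM already exists, so the machine is reused and only Kubernetes is reconciled; that is why this start takes ~29 s against ~85 s for the initial StartWithProxy run above. Sketch, using the profile from this run:

    # First start creates the VM; repeating start on the same profile takes the soft path.
    minikube start -p functional-020284 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 --container-runtime=crio
    minikube start -p functional-020284 --alsologtostderr -v=8   # reuses the existing machine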

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-020284 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:3.1: (1.274994984s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:3.3: (1.421248812s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 cache add registry.k8s.io/pause:latest: (1.355603139s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-020284 /tmp/TestFunctionalserialCacheCmdcacheadd_local394922079/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache add minikube-local-cache-test:functional-020284
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 cache add minikube-local-cache-test:functional-020284: (1.171321166s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache delete minikube-local-cache-test:functional-020284
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-020284
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.51s)
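The local-cache variant above builds an image on the host and pushes it into minikube's image cache. By hand, roughly (the build context path is a placeholder; the tag is the one used in this run):

    docker build -t minikube-local-cache-test:functional-020284 ./build-context   # placeholder context
    minikube -p functional-020284 cache add minikube-local-cache-test:functional-020284
    # Clean up the cache entry and the host image afterwards.
    minikube -p functional-020284 cache delete minikube-local-cache-test:functional-020284
    docker rmi minikube-local-cache-test:functional-020284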

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.839671ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 cache reload: (1.077268501s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.77s)
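The reload sequence deletes a cached image inside the node, confirms it is gone (the expected exit status 1 above), then `cache reload` pushes every cached image back into the runtime. By hand:

    minikube -p functional-020284 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-020284 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image not present
    minikube -p functional-020284 cache reload
    minikube -p functional-020284 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again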

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 kubectl -- --context functional-020284 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-020284 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (395.73s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-020284 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m35.729310438s)
functional_test.go:761: restart took 6m35.729438726s for "functional-020284" cluster.
I0930 11:10:10.748075   11009 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (395.73s)
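--extra-config passes flags through to individual Kubernetes components; here it enables an extra admission plugin on the API server and restarts the cluster with --wait=all. A sketch of the same call plus an assumed verification step (the label selector and grep are illustrative, not part of this test):

    minikube start -p functional-020284 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # Assumed check: the plugin should appear on the kube-apiserver command line.
    kubectl --context functional-020284 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins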

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-020284 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 logs: (1.185810195s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 logs --file /tmp/TestFunctionalserialLogsFileCmd1494019046/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 logs --file /tmp/TestFunctionalserialLogsFileCmd1494019046/001/logs.txt: (1.209616168s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

                                                
                                    
TestFunctional/serial/InvalidService (4.2s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-020284 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-020284
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-020284: exit status 115 (273.14479ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.57:32630 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-020284 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 config get cpus: exit status 14 (45.612269ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 config get cpus: exit status 14 (49.044377ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
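The exit-status-14 results above are the expected behaviour of `config get` for a key that is not set, not failures. The full cycle by hand:

    minikube -p functional-020284 config unset cpus
    minikube -p functional-020284 config get cpus    # exit status 14: key not found
    minikube -p functional-020284 config set cpus 2
    minikube -p functional-020284 config get cpus    # prints 2
    minikube -p functional-020284 config unset cpus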

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-020284 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-020284 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26555: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.78s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-020284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.643995ms)

                                                
                                                
-- stdout --
	* [functional-020284] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:10:44.590649   26190 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:10:44.590877   26190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:10:44.590885   26190 out.go:358] Setting ErrFile to fd 2...
	I0930 11:10:44.590889   26190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:10:44.591041   26190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:10:44.591584   26190 out.go:352] Setting JSON to false
	I0930 11:10:44.592488   26190 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3192,"bootTime":1727691453,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:10:44.592587   26190 start.go:139] virtualization: kvm guest
	I0930 11:10:44.594875   26190 out.go:177] * [functional-020284] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0930 11:10:44.596168   26190 notify.go:220] Checking for updates...
	I0930 11:10:44.596184   26190 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:10:44.597451   26190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:10:44.598974   26190 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:10:44.600293   26190 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:10:44.601677   26190 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:10:44.603033   26190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:10:44.604698   26190 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:10:44.605266   26190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:10:44.605315   26190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:10:44.620765   26190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40599
	I0930 11:10:44.621193   26190 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:10:44.621736   26190 main.go:141] libmachine: Using API Version  1
	I0930 11:10:44.621760   26190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:10:44.622052   26190 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:10:44.622217   26190 main.go:141] libmachine: (functional-020284) Calling .DriverName
	I0930 11:10:44.622425   26190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:10:44.622725   26190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:10:44.622763   26190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:10:44.639200   26190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0930 11:10:44.639759   26190 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:10:44.640257   26190 main.go:141] libmachine: Using API Version  1
	I0930 11:10:44.640283   26190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:10:44.640696   26190 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:10:44.640933   26190 main.go:141] libmachine: (functional-020284) Calling .DriverName
	I0930 11:10:44.676561   26190 out.go:177] * Using the kvm2 driver based on existing profile
	I0930 11:10:44.677807   26190 start.go:297] selected driver: kvm2
	I0930 11:10:44.677825   26190 start.go:901] validating driver "kvm2" against &{Name:functional-020284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-020284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:10:44.677951   26190 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:10:44.680150   26190 out.go:201] 
	W0930 11:10:44.681543   26190 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 11:10:44.682964   26190 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
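The first invocation is supposed to fail: even with --dry-run, minikube validates the request, and 250 MB is below the 1800 MB usable minimum (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY). Reproducing both halves:

    # Rejected before any work is done.
    minikube start -p functional-020284 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    # Dry run against the profile's existing settings succeeds.
    minikube start -p functional-020284 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio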

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-020284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-020284 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.054083ms)

                                                
                                                
-- stdout --
	* [functional-020284] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:10:43.530149   25919 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:10:43.530264   25919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:10:43.530274   25919 out.go:358] Setting ErrFile to fd 2...
	I0930 11:10:43.530279   25919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:10:43.530569   25919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:10:43.531142   25919 out.go:352] Setting JSON to false
	I0930 11:10:43.532062   25919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3190,"bootTime":1727691453,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 11:10:43.532154   25919 start.go:139] virtualization: kvm guest
	I0930 11:10:43.534496   25919 out.go:177] * [functional-020284] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0930 11:10:43.535841   25919 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 11:10:43.535838   25919 notify.go:220] Checking for updates...
	I0930 11:10:43.537337   25919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 11:10:43.538740   25919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	I0930 11:10:43.546147   25919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	I0930 11:10:43.547679   25919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 11:10:43.549368   25919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 11:10:43.551110   25919 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:10:43.551521   25919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:10:43.551580   25919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:10:43.570417   25919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0930 11:10:43.570848   25919 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:10:43.571639   25919 main.go:141] libmachine: Using API Version  1
	I0930 11:10:43.571661   25919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:10:43.572081   25919 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:10:43.572331   25919 main.go:141] libmachine: (functional-020284) Calling .DriverName
	I0930 11:10:43.572588   25919 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 11:10:43.573018   25919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:10:43.573063   25919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:10:43.591253   25919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0930 11:10:43.591850   25919 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:10:43.592354   25919 main.go:141] libmachine: Using API Version  1
	I0930 11:10:43.592371   25919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:10:43.592715   25919 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:10:43.592885   25919 main.go:141] libmachine: (functional-020284) Calling .DriverName
	I0930 11:10:43.630986   25919 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0930 11:10:43.632355   25919 start.go:297] selected driver: kvm2
	I0930 11:10:43.632373   25919 start.go:901] validating driver "kvm2" against &{Name:functional-020284 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-020284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 11:10:43.632490   25919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 11:10:43.638236   25919 out.go:201] 
	W0930 11:10:43.639737   25919 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 11:10:43.641237   25919 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
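The French output above is the point of this test: the same dry-run failure is rendered in a non-English locale. The locale is presumably selected through the standard environment (LC_ALL/LANG), which this log does not show; a hedged sketch:

    # Assumed: forcing a French locale makes minikube emit localized messages.
    LC_ALL=fr minikube start -p functional-020284 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio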

                                                
                                    
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (23.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-020284 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-020284 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bcxv4" [fb190ebd-bad4-4b7e-89c2-54a1203747e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-bcxv4" [fb190ebd-bad4-4b7e-89c2-54a1203747e1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.003960388s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.57:30507
functional_test.go:1675: http://192.168.39.57:30507: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-bcxv4

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.57:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.57:30507
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.47s)
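The flow above is the usual NodePort round trip: create a deployment, expose it, ask minikube for the node URL, and make an HTTP request. Sketch with the same image and names (curl stands in for the test's HTTP client):

    kubectl --context functional-020284 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-020284 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-020284 rollout status deployment/hello-node-connect
    URL=$(minikube -p functional-020284 service hello-node-connect --url)
    curl "$URL"   # echoserver reports the hostname and request headers, as in the body above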

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (31.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [15334c8e-4532-40a5-b787-c739ea70b58a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004246076s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-020284 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-020284 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-020284 get pvc myclaim -o=json
I0930 11:10:50.306842   11009 retry.go:31] will retry after 1.86938772s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:0bd86687-e70c-450b-ad4d-4eb2b90c2ca7 ResourceVersion:612 Generation:0 CreationTimestamp:2024-09-30 11:10:50 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0008983b0 VolumeMode:0xc0008983d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-020284 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020284 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [afad3dfa-760b-470b-a272-46ea2e760a09] Pending
helpers_test.go:344: "sp-pod" [afad3dfa-760b-470b-a272-46ea2e760a09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/09/30 11:10:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [afad3dfa-760b-470b-a272-46ea2e760a09] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004469311s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-020284 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-020284 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-020284 delete -f testdata/storage-provisioner/pod.yaml: (1.15637651s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-020284 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [120b08ba-777e-4b05-96b5-ab5a7ecec84f] Pending
helpers_test.go:344: "sp-pod" [120b08ba-777e-4b05-96b5-ab5a7ecec84f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [120b08ba-777e-4b05-96b5-ab5a7ecec84f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004362482s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-020284 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.84s)
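
The claim applied here can be read back from the kubectl.kubernetes.io/last-applied-configuration annotation in the retry log above. A minimal sketch of an equivalent manifest and the readiness check the test loops on, assuming the real testdata/storage-provisioner/pvc.yaml matches the logged spec:

# Equivalent of the claim logged above; the actual testdata file may differ.
kubectl --context functional-020284 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
# The test then polls the claim until its phase moves from "Pending" to "Bound":
kubectl --context functional-020284 get pvc myclaim -o jsonpath='{.status.phase}'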

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh -n functional-020284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cp functional-020284:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2200406581/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh -n functional-020284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh -n functional-020284 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
TestFunctional/parallel/MySQL (28.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-020284 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-l8jgz" [6249ae12-b6ec-4326-a388-446d94cff6ed] Pending
helpers_test.go:344: "mysql-6cdb49bbb-l8jgz" [6249ae12-b6ec-4326-a388-446d94cff6ed] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-l8jgz" [6249ae12-b6ec-4326-a388-446d94cff6ed] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.00410425s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;": exit status 1 (209.506234ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 11:10:41.277696   11009 retry.go:31] will retry after 1.131589759s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;": exit status 1 (139.028557ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 11:10:42.549600   11009 retry.go:31] will retry after 1.110435559s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;": exit status 1 (325.691917ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0930 11:10:43.986810   11009 retry.go:31] will retry after 2.26089674s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-020284 exec mysql-6cdb49bbb-l8jgz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.67s)
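
The ERROR 1045 and ERROR 2002 responses above are the normal window while mysqld is still initializing inside the pod; the test simply retries with backoff until the query succeeds. A rough shell equivalent of that loop (pod name taken from this run; the real backoff intervals come from retry.go):

POD=mysql-6cdb49bbb-l8jgz
until kubectl --context functional-020284 exec "$POD" -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2   # access-denied / socket errors are expected while mysqld starts up
done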

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11009/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/test/nested/copy/11009/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
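
The file checked here comes from minikube's file sync: files placed under $MINIKUBE_HOME/files/ on the host are copied into the guest at the same relative path (11009 is just the PID of the test process). A sketch of how the fixture ends up in the VM, assuming the default sync directory layout:

# Host side (done by the test suite before the node is started):
mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/11009"
echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/11009/hosts"
# Guest side, once the node is up:
out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/test/nested/copy/11009/hosts"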

                                                
                                    
TestFunctional/parallel/CertSync (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11009.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/ssl/certs/11009.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11009.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /usr/share/ca-certificates/11009.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/110092.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/ssl/certs/110092.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/110092.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /usr/share/ca-certificates/110092.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
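
The two sets of paths checked here are the same certificates under two names: the synced .pem files and their OpenSSL subject-hash aliases (51391683.0, 3ec20f2e.0). Given the source certificate on the host, the hash can be reproduced like this (illustrative; the host path is an assumption, file names are taken from this run):

# Prints the subject hash the in-VM alias is named after,
# e.g. 51391683 if 11009.pem is the certificate behind 51391683.0.
openssl x509 -noout -subject_hash -in /path/to/11009.pem
# The VM-side alias checked above:
out/minikube-linux-amd64 -p functional-020284 ssh "sudo cat /etc/ssl/certs/51391683.0"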

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-020284 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
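
The go-template above prints only the label keys of the first node; an equivalent spot check with jsonpath (illustrative only):

kubectl --context functional-020284 get nodes -o jsonpath='{.items[0].metadata.labels}'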

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "sudo systemctl is-active docker": exit status 1 (222.359636ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "sudo systemctl is-active containerd": exit status 1 (199.722141ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
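
The non-zero exits above are the expected outcome on a cri-o node: systemctl is-active prints "inactive" and returns status 3 for a stopped unit, which the ssh wrapper surfaces as "Process exited with status 3" and minikube reports as exit status 1. The same check by hand:

out/minikube-linux-amd64 -p functional-020284 ssh "sudo systemctl is-active docker" \
  || echo "docker unit not active (expected with --container-runtime=crio)"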

                                                
                                    
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
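
update-context rewrites the kubeconfig entry for this profile so the server address matches the VM's current IP and port; a quick way to inspect what it wrote (illustrative):

kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-020284")].cluster.server}'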

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-020284 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-020284
localhost/kicbase/echo-server:functional-020284
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-020284 image ls --format short --alsologtostderr:
I0930 11:10:45.980178   26446 out.go:345] Setting OutFile to fd 1 ...
I0930 11:10:45.980450   26446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:45.980461   26446 out.go:358] Setting ErrFile to fd 2...
I0930 11:10:45.980468   26446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:45.980682   26446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
I0930 11:10:45.981277   26446 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:45.981392   26446 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:45.981803   26446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:45.981847   26446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:45.996701   26446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
I0930 11:10:45.997208   26446 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:45.997915   26446 main.go:141] libmachine: Using API Version  1
I0930 11:10:45.997948   26446 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:45.998312   26446 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:45.998517   26446 main.go:141] libmachine: (functional-020284) Calling .GetState
I0930 11:10:46.000433   26446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.000466   26446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.017879   26446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33077
I0930 11:10:46.018382   26446 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.018841   26446 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.018865   26446 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.019158   26446 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.019339   26446 main.go:141] libmachine: (functional-020284) Calling .DriverName
I0930 11:10:46.019549   26446 ssh_runner.go:195] Run: systemctl --version
I0930 11:10:46.019570   26446 main.go:141] libmachine: (functional-020284) Calling .GetSSHHostname
I0930 11:10:46.022264   26446 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.022687   26446 main.go:141] libmachine: (functional-020284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:33:b0", ip: ""} in network mk-functional-020284: {Iface:virbr1 ExpiryTime:2024-09-30 12:01:48 +0000 UTC Type:0 Mac:52:54:00:08:33:b0 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-020284 Clientid:01:52:54:00:08:33:b0}
I0930 11:10:46.022715   26446 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined IP address 192.168.39.57 and MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.022891   26446 main.go:141] libmachine: (functional-020284) Calling .GetSSHPort
I0930 11:10:46.023081   26446 main.go:141] libmachine: (functional-020284) Calling .GetSSHKeyPath
I0930 11:10:46.023254   26446 main.go:141] libmachine: (functional-020284) Calling .GetSSHUsername
I0930 11:10:46.023393   26446 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/functional-020284/id_rsa Username:docker}
I0930 11:10:46.106811   26446 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 11:10:46.174711   26446 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.174728   26446 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.175040   26446 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.175063   26446 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
I0930 11:10:46.175092   26446 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:46.175108   26446 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.175116   26446 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.175327   26446 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.175348   26446 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
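
As the stderr above shows, "image ls" shells into the node and reads the runtime's image store with crictl; the same raw listing can be pulled directly:

out/minikube-linux-amd64 -p functional-020284 ssh "sudo crictl images --output json"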

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-020284 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/kicbase/echo-server           | functional-020284  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-020284  | dd7cfa5e81456 | 3.33kB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-020284 image ls --format table --alsologtostderr:
I0930 11:10:46.896282   26591 out.go:345] Setting OutFile to fd 1 ...
I0930 11:10:46.896417   26591 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.896426   26591 out.go:358] Setting ErrFile to fd 2...
I0930 11:10:46.896431   26591 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.896593   26591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
I0930 11:10:46.897168   26591 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.897273   26591 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.897669   26591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.897711   26591 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.913293   26591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
I0930 11:10:46.913865   26591 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.914572   26591 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.914599   26591 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.914987   26591 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.915163   26591 main.go:141] libmachine: (functional-020284) Calling .GetState
I0930 11:10:46.917342   26591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.917391   26591 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.932954   26591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
I0930 11:10:46.933591   26591 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.934266   26591 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.934316   26591 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.934666   26591 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.934874   26591 main.go:141] libmachine: (functional-020284) Calling .DriverName
I0930 11:10:46.935071   26591 ssh_runner.go:195] Run: systemctl --version
I0930 11:10:46.935098   26591 main.go:141] libmachine: (functional-020284) Calling .GetSSHHostname
I0930 11:10:46.938039   26591 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.938497   26591 main.go:141] libmachine: (functional-020284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:33:b0", ip: ""} in network mk-functional-020284: {Iface:virbr1 ExpiryTime:2024-09-30 12:01:48 +0000 UTC Type:0 Mac:52:54:00:08:33:b0 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-020284 Clientid:01:52:54:00:08:33:b0}
I0930 11:10:46.938531   26591 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined IP address 192.168.39.57 and MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.938630   26591 main.go:141] libmachine: (functional-020284) Calling .GetSSHPort
I0930 11:10:46.938810   26591 main.go:141] libmachine: (functional-020284) Calling .GetSSHKeyPath
I0930 11:10:46.938973   26591 main.go:141] libmachine: (functional-020284) Calling .GetSSHUsername
I0930 11:10:46.939206   26591 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/functional-020284/id_rsa Username:docker}
I0930 11:10:47.054268   26591 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 11:10:47.141280   26591 main.go:141] libmachine: Making call to close driver server
I0930 11:10:47.141310   26591 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:47.141666   26591 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
I0930 11:10:47.141724   26591 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:47.141736   26591 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:47.141748   26591 main.go:141] libmachine: Making call to close driver server
I0930 11:10:47.141757   26591 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:47.141975   26591 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:47.141994   26591 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:47.142015   26591 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-020284 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"dd7cfa5e81456b3512241b1752c4efab2e4c3dfeae887a2d6522f8226a1c06da","repoDigests":["localhost/minikube-local-cache-test@sha256:de8773bb7243d3e1b21e0f0f7309e78b0daf4b9c9602fdcfcc27574367b9776b"],"repoTags":["localhost/minikube-local-cache-test:functional-020284"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserve
r@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927
b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-020284"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab
5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8
s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab9
89956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-020284 image ls --format json --alsologtostderr:
I0930 11:10:46.563369   26525 out.go:345] Setting OutFile to fd 1 ...
I0930 11:10:46.563505   26525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.563518   26525 out.go:358] Setting ErrFile to fd 2...
I0930 11:10:46.563525   26525 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.563723   26525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
I0930 11:10:46.564514   26525 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.564663   26525 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.565233   26525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.565286   26525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.581158   26525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40003
I0930 11:10:46.581754   26525 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.582429   26525 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.582454   26525 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.582828   26525 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.583015   26525 main.go:141] libmachine: (functional-020284) Calling .GetState
I0930 11:10:46.585267   26525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.585307   26525 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.602026   26525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42711
I0930 11:10:46.602410   26525 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.602873   26525 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.602896   26525 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.603298   26525 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.603520   26525 main.go:141] libmachine: (functional-020284) Calling .DriverName
I0930 11:10:46.603738   26525 ssh_runner.go:195] Run: systemctl --version
I0930 11:10:46.603764   26525 main.go:141] libmachine: (functional-020284) Calling .GetSSHHostname
I0930 11:10:46.606903   26525 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.607317   26525 main.go:141] libmachine: (functional-020284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:33:b0", ip: ""} in network mk-functional-020284: {Iface:virbr1 ExpiryTime:2024-09-30 12:01:48 +0000 UTC Type:0 Mac:52:54:00:08:33:b0 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-020284 Clientid:01:52:54:00:08:33:b0}
I0930 11:10:46.607345   26525 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined IP address 192.168.39.57 and MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.607492   26525 main.go:141] libmachine: (functional-020284) Calling .GetSSHPort
I0930 11:10:46.607639   26525 main.go:141] libmachine: (functional-020284) Calling .GetSSHKeyPath
I0930 11:10:46.607773   26525 main.go:141] libmachine: (functional-020284) Calling .GetSSHUsername
I0930 11:10:46.607875   26525 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/functional-020284/id_rsa Username:docker}
I0930 11:10:46.747600   26525 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 11:10:46.839572   26525 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.839588   26525 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.839868   26525 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.839888   26525 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:46.839905   26525 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.839873   26525 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
I0930 11:10:46.839913   26525 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.840166   26525 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.840180   26525 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:46.840203   26525 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
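
The JSON form is the easiest one to post-process; for example, to pull just the repo tags out of a listing like the one above (assumes jq is installed on the host):

out/minikube-linux-amd64 -p functional-020284 image ls --format json | jq -r '.[].repoTags[]'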

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-020284 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: dd7cfa5e81456b3512241b1752c4efab2e4c3dfeae887a2d6522f8226a1c06da
repoDigests:
- localhost/minikube-local-cache-test@sha256:de8773bb7243d3e1b21e0f0f7309e78b0daf4b9c9602fdcfcc27574367b9776b
repoTags:
- localhost/minikube-local-cache-test:functional-020284
size: "3330"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-020284
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-020284 image ls --format yaml --alsologtostderr:
I0930 11:10:46.220732   26469 out.go:345] Setting OutFile to fd 1 ...
I0930 11:10:46.220864   26469 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.220876   26469 out.go:358] Setting ErrFile to fd 2...
I0930 11:10:46.220882   26469 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.221073   26469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
I0930 11:10:46.221817   26469 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.221958   26469 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.222366   26469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.222420   26469 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.238083   26469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41115
I0930 11:10:46.238677   26469 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.239297   26469 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.239339   26469 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.239734   26469 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.239932   26469 main.go:141] libmachine: (functional-020284) Calling .GetState
I0930 11:10:46.242156   26469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.242210   26469 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.260848   26469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
I0930 11:10:46.261337   26469 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.261950   26469 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.261984   26469 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.262374   26469 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.262582   26469 main.go:141] libmachine: (functional-020284) Calling .DriverName
I0930 11:10:46.262792   26469 ssh_runner.go:195] Run: systemctl --version
I0930 11:10:46.262826   26469 main.go:141] libmachine: (functional-020284) Calling .GetSSHHostname
I0930 11:10:46.266447   26469 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.266947   26469 main.go:141] libmachine: (functional-020284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:33:b0", ip: ""} in network mk-functional-020284: {Iface:virbr1 ExpiryTime:2024-09-30 12:01:48 +0000 UTC Type:0 Mac:52:54:00:08:33:b0 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-020284 Clientid:01:52:54:00:08:33:b0}
I0930 11:10:46.266972   26469 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined IP address 192.168.39.57 and MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.267123   26469 main.go:141] libmachine: (functional-020284) Calling .GetSSHPort
I0930 11:10:46.267318   26469 main.go:141] libmachine: (functional-020284) Calling .GetSSHKeyPath
I0930 11:10:46.267490   26469 main.go:141] libmachine: (functional-020284) Calling .GetSSHUsername
I0930 11:10:46.267629   26469 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/functional-020284/id_rsa Username:docker}
I0930 11:10:46.348721   26469 ssh_runner.go:195] Run: sudo crictl images --output json
I0930 11:10:46.446831   26469 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.446848   26469 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.447112   26469 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
I0930 11:10:46.447117   26469 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.447146   26469 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:46.447156   26469 main.go:141] libmachine: Making call to close driver server
I0930 11:10:46.447184   26469 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:46.447550   26469 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:46.447562   26469 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:46.447614   26469 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh pgrep buildkitd: exit status 1 (229.723802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 testdata/build --alsologtostderr: (2.689478993s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 55e3c45af9b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-020284
--> e870d7bb5d1
Successfully tagged localhost/my-image:functional-020284
e870d7bb5d1a61768ee3b3e3c59588ab9a44e9e2903c1f304eb889829d65f5eb
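
The three STEP lines above map onto a build context like the following sketch (the real testdata/build directory may differ; content.txt here is a placeholder):

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "placeholder" > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 .
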
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 testdata/build --alsologtostderr:
I0930 11:10:46.724601   26568 out.go:345] Setting OutFile to fd 1 ...
I0930 11:10:46.724733   26568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.724741   26568 out.go:358] Setting ErrFile to fd 2...
I0930 11:10:46.724746   26568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 11:10:46.724927   26568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
I0930 11:10:46.725528   26568 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.726074   26568 config.go:182] Loaded profile config "functional-020284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0930 11:10:46.726442   26568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.726495   26568 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.742123   26568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44955
I0930 11:10:46.742687   26568 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.743346   26568 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.743376   26568 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.743914   26568 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.744141   26568 main.go:141] libmachine: (functional-020284) Calling .GetState
I0930 11:10:46.746312   26568 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0930 11:10:46.746405   26568 main.go:141] libmachine: Launching plugin server for driver kvm2
I0930 11:10:46.762061   26568 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43193
I0930 11:10:46.762538   26568 main.go:141] libmachine: () Calling .GetVersion
I0930 11:10:46.763125   26568 main.go:141] libmachine: Using API Version  1
I0930 11:10:46.763168   26568 main.go:141] libmachine: () Calling .SetConfigRaw
I0930 11:10:46.763542   26568 main.go:141] libmachine: () Calling .GetMachineName
I0930 11:10:46.763746   26568 main.go:141] libmachine: (functional-020284) Calling .DriverName
I0930 11:10:46.763946   26568 ssh_runner.go:195] Run: systemctl --version
I0930 11:10:46.763988   26568 main.go:141] libmachine: (functional-020284) Calling .GetSSHHostname
I0930 11:10:46.766970   26568 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.767396   26568 main.go:141] libmachine: (functional-020284) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:33:b0", ip: ""} in network mk-functional-020284: {Iface:virbr1 ExpiryTime:2024-09-30 12:01:48 +0000 UTC Type:0 Mac:52:54:00:08:33:b0 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-020284 Clientid:01:52:54:00:08:33:b0}
I0930 11:10:46.767428   26568 main.go:141] libmachine: (functional-020284) DBG | domain functional-020284 has defined IP address 192.168.39.57 and MAC address 52:54:00:08:33:b0 in network mk-functional-020284
I0930 11:10:46.767550   26568 main.go:141] libmachine: (functional-020284) Calling .GetSSHPort
I0930 11:10:46.767715   26568 main.go:141] libmachine: (functional-020284) Calling .GetSSHKeyPath
I0930 11:10:46.767874   26568 main.go:141] libmachine: (functional-020284) Calling .GetSSHUsername
I0930 11:10:46.768010   26568 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/functional-020284/id_rsa Username:docker}
I0930 11:10:46.887111   26568 build_images.go:161] Building image from path: /tmp/build.47763807.tar
I0930 11:10:46.887192   26568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 11:10:46.922033   26568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.47763807.tar
I0930 11:10:46.934037   26568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.47763807.tar: stat -c "%s %y" /var/lib/minikube/build/build.47763807.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.47763807.tar': No such file or directory
I0930 11:10:46.934074   26568 ssh_runner.go:362] scp /tmp/build.47763807.tar --> /var/lib/minikube/build/build.47763807.tar (3072 bytes)
I0930 11:10:47.042034   26568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.47763807
I0930 11:10:47.069446   26568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.47763807 -xf /var/lib/minikube/build/build.47763807.tar
I0930 11:10:47.089552   26568 crio.go:315] Building image: /var/lib/minikube/build/build.47763807
I0930 11:10:47.089630   26568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-020284 /var/lib/minikube/build/build.47763807 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0930 11:10:49.329431   26568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-020284 /var/lib/minikube/build/build.47763807 --cgroup-manager=cgroupfs: (2.23977414s)
I0930 11:10:49.329489   26568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.47763807
I0930 11:10:49.350436   26568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.47763807.tar
I0930 11:10:49.365634   26568 build_images.go:217] Built localhost/my-image:functional-020284 from /tmp/build.47763807.tar
I0930 11:10:49.365671   26568 build_images.go:133] succeeded building to: functional-020284
I0930 11:10:49.365676   26568 build_images.go:134] failed building to: 
I0930 11:10:49.365696   26568 main.go:141] libmachine: Making call to close driver server
I0930 11:10:49.365703   26568 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:49.365963   26568 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:49.365979   26568 main.go:141] libmachine: Making call to close connection to plugin binary
I0930 11:10:49.365986   26568 main.go:141] libmachine: Making call to close driver server
I0930 11:10:49.365994   26568 main.go:141] libmachine: (functional-020284) Calling .Close
I0930 11:10:49.366255   26568 main.go:141] libmachine: (functional-020284) DBG | Closing plugin on server side
I0930 11:10:49.366311   26568 main.go:141] libmachine: Successfully made call to close driver server
I0930 11:10:49.366336   26568 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.21s)
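The build above can be reproduced by hand against the same profile. The sketch below is illustrative only: the directory and content.txt are placeholders, and the Dockerfile contents are inferred from the STEP lines in the stdout above rather than taken from testdata/build directly.
    mkdir -p /tmp/my-build && cd /tmp/my-build
    echo hello > content.txt    # placeholder payload for the ADD step
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # Build inside the cluster's CRI-O runtime (podman), as the test does:
    out/minikube-linux-amd64 -p functional-020284 image build -t localhost/my-image:functional-020284 . --alsologtostderr
    out/minikube-linux-amd64 -p functional-020284 image ls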

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-020284
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (21.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdany-port413413360/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727694618894680607" to /tmp/TestFunctionalparallelMountCmdany-port413413360/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727694618894680607" to /tmp/TestFunctionalparallelMountCmdany-port413413360/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727694618894680607" to /tmp/TestFunctionalparallelMountCmdany-port413413360/001/test-1727694618894680607
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (207.698312ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 11:10:19.102695   11009 retry.go:31] will retry after 343.347366ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 11:10 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 11:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 11:10 test-1727694618894680607
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh cat /mount-9p/test-1727694618894680607
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-020284 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cabfc92f-4a38-4949-a60a-97ea4a6f5c95] Pending
helpers_test.go:344: "busybox-mount" [cabfc92f-4a38-4949-a60a-97ea4a6f5c95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cabfc92f-4a38-4949-a60a-97ea4a6f5c95] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cabfc92f-4a38-4949-a60a-97ea4a6f5c95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.004203413s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-020284 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdany-port413413360/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.43s)
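For reference, the 9p mount flow this test exercises can be run manually with the same commands; the host directory below is a placeholder, and the sketch assumes the functional-020284 profile is still running.
    mkdir -p /tmp/mount-src
    # Start the host-to-guest 9p mount in the background (the test runs it as a daemon):
    out/minikube-linux-amd64 mount -p functional-020284 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    # Verify from inside the guest, as the test does:
    out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-020284 ssh -- ls -la /mount-9p
    # Tear down:
    out/minikube-linux-amd64 -p functional-020284 ssh "sudo umount -f /mount-9p"
    kill %1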

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image load --daemon kicbase/echo-server:functional-020284 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image load --daemon kicbase/echo-server:functional-020284 --alsologtostderr: (1.907716845s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image load --daemon kicbase/echo-server:functional-020284 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-020284
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image load --daemon kicbase/echo-server:functional-020284 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image load --daemon kicbase/echo-server:functional-020284 --alsologtostderr: (3.362928189s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image save kicbase/echo-server:functional-020284 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image save kicbase/echo-server:functional-020284 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.780444055s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image rm kicbase/echo-server:functional-020284 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.036518301s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-020284
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 image save --daemon kicbase/echo-server:functional-020284 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-020284 image save --daemon kicbase/echo-server:functional-020284 --alsologtostderr: (1.52924898s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-020284
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.56s)
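Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon runs above cover a full save/remove/reload round trip. A minimal manual equivalent, with the tarball path swapped for a placeholder:
    # Export the image from the cluster runtime to a tarball on the host:
    out/minikube-linux-amd64 -p functional-020284 image save kicbase/echo-server:functional-020284 /tmp/echo-server-save.tar --alsologtostderr
    # Remove it from the cluster, then load it back from the tarball:
    out/minikube-linux-amd64 -p functional-020284 image rm kicbase/echo-server:functional-020284 --alsologtostderr
    out/minikube-linux-amd64 -p functional-020284 image load /tmp/echo-server-save.tar --alsologtostderr
    # Or write it back into the local docker daemon instead of a file:
    out/minikube-linux-amd64 -p functional-020284 image save --daemon kicbase/echo-server:functional-020284 --alsologtostderr
    out/minikube-linux-amd64 -p functional-020284 image ls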

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-020284 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-020284 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2txh6" [ff9c5416-2183-4b4a-a4f3-16adcce495a6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2txh6" [ff9c5416-2183-4b4a-a4f3-16adcce495a6] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.005939918s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.39s)
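The NodePort flow exercised here, and checked again by the ServiceCmd/HTTPS, Format and URL runs further down, can be replayed by hand; these are the same kubectl and minikube invocations the tests issue (the label selector is the one the test waits on):
    kubectl --context functional-020284 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-020284 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-020284 get pods -l app=hello-node    # wait until Running
    # Resolve the NodePort endpoint, as ServiceCmd/URL does later:
    out/minikube-linux-amd64 -p functional-020284 service hello-node --url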

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdspecific-port506275099/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (192.528339ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 11:10:40.514016   11009 retry.go:31] will retry after 408.638561ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdspecific-port506275099/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "sudo umount -f /mount-9p": exit status 1 (182.755394ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-020284 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdspecific-port506275099/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T" /mount1: exit status 1 (215.148111ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0930 11:10:42.139501   11009 retry.go:31] will retry after 511.885799ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-020284 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-020284 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2248338223/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "306.264086ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.978724ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service list -o json
functional_test.go:1494: Took "494.495316ms" to run "out/minikube-linux-amd64 -p functional-020284 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "452.756634ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.71555ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.57:31355
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-020284 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.57:31355
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-020284
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-020284
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-020284
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (201.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-033260 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-033260 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.780610465s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.46s)
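For context, the --ha flag here asks minikube for a multi-control-plane (highly available) cluster, which the later AddWorkerNode step then grows with a worker. A manual equivalent of what this serial group drives, using the same flags as the test:
    out/minikube-linux-amd64 start -p ha-033260 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
    # Add a worker node, as TestMultiControlPlane/serial/AddWorkerNode does below:
    out/minikube-linux-amd64 node add -p ha-033260 -v=7 --alsologtostderr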

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-033260 -- rollout status deployment/busybox: (2.718996454s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-748nr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-nbhwc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-rkczc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-748nr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-nbhwc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-rkczc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-748nr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-nbhwc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-rkczc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.98s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-748nr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-748nr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-nbhwc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-nbhwc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-rkczc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-033260 -- exec busybox-7dff88458-rkczc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-033260 -v=7 --alsologtostderr
E0930 11:15:18.063892   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.070333   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.081785   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.103314   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.144810   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.226321   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.387932   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:18.710176   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:19.351689   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:20.633831   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:23.195204   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:28.317558   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:15:38.559537   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-033260 -v=7 --alsologtostderr: (56.196746263s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.06s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-033260 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp testdata/cp-test.txt ha-033260:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260:/home/docker/cp-test.txt ha-033260-m02:/home/docker/cp-test_ha-033260_ha-033260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test_ha-033260_ha-033260-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260:/home/docker/cp-test.txt ha-033260-m03:/home/docker/cp-test_ha-033260_ha-033260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test_ha-033260_ha-033260-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260:/home/docker/cp-test.txt ha-033260-m04:/home/docker/cp-test_ha-033260_ha-033260-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test_ha-033260_ha-033260-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp testdata/cp-test.txt ha-033260-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m02:/home/docker/cp-test.txt ha-033260:/home/docker/cp-test_ha-033260-m02_ha-033260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test_ha-033260-m02_ha-033260.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m02:/home/docker/cp-test.txt ha-033260-m03:/home/docker/cp-test_ha-033260-m02_ha-033260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test_ha-033260-m02_ha-033260-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m02:/home/docker/cp-test.txt ha-033260-m04:/home/docker/cp-test_ha-033260-m02_ha-033260-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test_ha-033260-m02_ha-033260-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp testdata/cp-test.txt ha-033260-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt ha-033260:/home/docker/cp-test_ha-033260-m03_ha-033260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test_ha-033260-m03_ha-033260.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt ha-033260-m02:/home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test_ha-033260-m03_ha-033260-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m03:/home/docker/cp-test.txt ha-033260-m04:/home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test_ha-033260-m03_ha-033260-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp testdata/cp-test.txt ha-033260-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3040784691/001/cp-test_ha-033260-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt ha-033260:/home/docker/cp-test_ha-033260-m04_ha-033260.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260 "sudo cat /home/docker/cp-test_ha-033260-m04_ha-033260.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt ha-033260-m02:/home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test_ha-033260-m04_ha-033260-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m04:/home/docker/cp-test.txt ha-033260-m03:/home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test_ha-033260-m04_ha-033260-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.06s)
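The copy matrix above boils down to two primitives, minikube cp and minikube ssh -n. A single round trip between two of the nodes looks like this (the local file is a placeholder standing in for testdata/cp-test.txt):
    echo "cp-test" > /tmp/cp-test.txt
    out/minikube-linux-amd64 -p ha-033260 cp /tmp/cp-test.txt ha-033260-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m02 "sudo cat /home/docker/cp-test.txt"
    # Copy node-to-node, as the test also does:
    out/minikube-linux-amd64 -p ha-033260 cp ha-033260-m02:/home/docker/cp-test.txt ha-033260-m03:/home/docker/cp-test_m02_m03.txt
    out/minikube-linux-amd64 -p ha-033260 ssh -n ha-033260-m03 "sudo cat /home/docker/cp-test_m02_m03.txt"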

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.970730852s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.97s)

                                                
                                    
TestJSONOutput/start/Command (85.52s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-279181 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0930 11:35:18.064551   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-279181 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m25.523545309s)
--- PASS: TestJSONOutput/start/Command (85.52s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-279181 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-279181 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-279181 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-279181 --output=json --user=testUser: (7.354581131s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-337150 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-337150 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.950343ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b965fab8-e6e6-4581-ab52-aa1ae05d2d43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-337150] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e21a5c0-5b75-4de9-b478-a1e83ca3307a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"346fcade-e89c-4daa-813e-c0c7f12320d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8c1672ca-128a-4535-b4c6-e60f42a293c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig"}}
	{"specversion":"1.0","id":"ebd101d9-d1eb-401c-88a5-276a00ffedf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube"}}
	{"specversion":"1.0","id":"a16f8424-84aa-4d51-8aa2-9e917279ce23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5d35b790-44d3-4a69-a8c4-c6a863dac332","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"95fb7173-ad9d-41a0-b192-f9f3467d3584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-337150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-337150
--- PASS: TestErrorJSONOutput (0.19s)
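Each line emitted with --output=json above is a CloudEvents-style JSON envelope whose "type" distinguishes step, info, and error events, with a string-valued "data" map. As a stand-alone illustration (not part of the test suite), a minimal Go sketch that reads such a stream from stdin and reports the error event and its exit code:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the JSON lines above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | this-program
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore lines that are not JSON events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s: %s (exit code %s)\n", ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Run against the stream above, this would print the DRV_UNSUPPORTED_OS event with exit code 56, which is exactly what the test asserts via the process exit status.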

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (92.29s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-408928 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-408928 --driver=kvm2  --container-runtime=crio: (44.122568145s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-423353 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-423353 --driver=kvm2  --container-runtime=crio: (45.410577799s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-408928
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-423353
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-423353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-423353
helpers_test.go:175: Cleaning up "first-408928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-408928
--- PASS: TestMinikubeProfile (92.29s)
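The test above switches the active profile twice and inspects `profile list -ojson` after each switch. A rough Go sketch of reading that JSON, assuming the output groups profiles under "valid" and "invalid" keys with a "Name" field per entry (verify the schema against your minikube version; binary path and flag spelling copied from the log):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed shape: {"invalid":[...],"valid":[{"Name":"first-408928",...},...]}
	var list struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	for _, p := range list.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Printf("%d invalid profile(s)\n", len(list.Invalid))
}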

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.58s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-588336 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-588336 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.58008692s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.58s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588336 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588336 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
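The two ssh probes above are the whole verification: the shared directory must be listable at /minikube-host inside the guest, and a 9p filesystem must appear in the guest's mount table. A stand-alone Go sketch of the same check, filtering the mount table locally instead of with grep (profile name and binary path copied from the log; adjust for your environment):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const profile = "mount-start-1-588336" // from the log above
	// `minikube ssh -- mount` prints the guest's mount table.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "9p") {
			fmt.Println("9p mount present:", line)
			return
		}
	}
	log.Fatal("no 9p mount found in guest mount table")
}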

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.27s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-603642 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-603642 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.271208627s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-588336 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-603642
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-603642: (1.267602762s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.77s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-603642
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-603642: (22.767678899s)
--- PASS: TestMountStart/serial/RestartStopped (23.77s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603642 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.43s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-457103 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0930 11:40:18.064250   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-457103 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.022677681s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.43s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.91s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-457103 -- rollout status deployment/busybox: (3.294285519s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-hwwdc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-xrkzl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-hwwdc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-xrkzl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-hwwdc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-xrkzl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.91s)
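Each lookup step above is a `kubectl exec ... nslookup <name>` whose exit status is the assertion: if a busybox pod cannot resolve kubernetes.io, kubernetes.default, or the fully qualified service name, the command fails and the test fails with it. A minimal stand-alone Go sketch of one such probe (pod and profile names copied from this run's log; they differ on every run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile, pod := "multinode-457103", "busybox-7dff88458-hwwdc"
	for _, name := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
		cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile, "--",
			"exec", pod, "--", "nslookup", name)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("lookup of %s failed: %v\n%s", name, err, out)
		}
		fmt.Println("resolved", name)
	}
}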

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-hwwdc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-hwwdc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-xrkzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-457103 -- exec busybox-7dff88458-xrkzl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
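The shell pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes the fifth line of the busybox nslookup output, where the resolved address appears, extracts its third space-separated field, and the follow-up `ping -c 1` of that address (192.168.39.1, the KVM gateway in this run) proves the pod can reach the host. A rough Go equivalent of just the extraction step, assuming busybox-style nslookup output of the shape shown in the sample below (the sample is illustrative, not captured from this run):

package main

import (
	"fmt"
	"log"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3` on busybox nslookup output.
func hostIPFromNslookup(output string) (string, error) {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("unexpected nslookup output: %q", output)
	}
	fields := strings.Split(lines[4], " ") // line 5, fields split on single spaces like cut(1)
	if len(fields) < 3 {
		return "", fmt.Errorf("unexpected address line: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	ip, err := hostIPFromNslookup(sample)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("would ping", ip) // the test then runs: ping -c 1 <ip> inside the pod
}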

                                                
                                    
TestMultiNode/serial/AddNode (51.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-457103 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-457103 -v 3 --alsologtostderr: (50.509978058s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-457103 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.16s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp testdata/cp-test.txt multinode-457103:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103:/home/docker/cp-test.txt multinode-457103-m02:/home/docker/cp-test_multinode-457103_multinode-457103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test_multinode-457103_multinode-457103-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103:/home/docker/cp-test.txt multinode-457103-m03:/home/docker/cp-test_multinode-457103_multinode-457103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test_multinode-457103_multinode-457103-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp testdata/cp-test.txt multinode-457103-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt multinode-457103:/home/docker/cp-test_multinode-457103-m02_multinode-457103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test_multinode-457103-m02_multinode-457103.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m02:/home/docker/cp-test.txt multinode-457103-m03:/home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test_multinode-457103-m02_multinode-457103-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp testdata/cp-test.txt multinode-457103-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile377977775/001/cp-test_multinode-457103-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt multinode-457103:/home/docker/cp-test_multinode-457103-m03_multinode-457103.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103 "sudo cat /home/docker/cp-test_multinode-457103-m03_multinode-457103.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 cp multinode-457103-m03:/home/docker/cp-test.txt multinode-457103-m02:/home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 ssh -n multinode-457103-m02 "sudo cat /home/docker/cp-test_multinode-457103-m03_multinode-457103-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.16s)
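Every pair of lines above is one round trip: `minikube cp` pushes or pulls cp-test.txt between host and node (or node and node), and the matching `ssh -n <node> "sudo cat ..."` reads the file back so its contents can be compared. A stand-alone Go sketch of a single push-and-verify cycle (profile and paths copied from the log):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-457103"         // from the log above
		local   = "testdata/cp-test.txt"     // file shipped with the test suite
		remote  = "/home/docker/cp-test.txt" // destination inside the node
	)
	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}
	// Push the file into the primary node.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", local, profile+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read it back over ssh and compare byte-for-byte.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", profile, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("content mismatch: got %q want %q", got, want)
	}
	fmt.Println("cp round trip verified")
}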

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 node stop m03: (1.470354885s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-457103 status: exit status 7 (424.205534ms)

                                                
                                                
-- stdout --
	multinode-457103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-457103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-457103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr: exit status 7 (421.695377ms)

                                                
                                                
-- stdout --
	multinode-457103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-457103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-457103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:42:15.671753   44540 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:42:15.671855   44540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:42:15.671866   44540 out.go:358] Setting ErrFile to fd 2...
	I0930 11:42:15.671872   44540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:42:15.672065   44540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3842/.minikube/bin
	I0930 11:42:15.672252   44540 out.go:352] Setting JSON to false
	I0930 11:42:15.672282   44540 mustload.go:65] Loading cluster: multinode-457103
	I0930 11:42:15.672393   44540 notify.go:220] Checking for updates...
	I0930 11:42:15.672779   44540 config.go:182] Loaded profile config "multinode-457103": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0930 11:42:15.672800   44540 status.go:174] checking status of multinode-457103 ...
	I0930 11:42:15.673289   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.673342   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.689845   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0930 11:42:15.690286   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.690905   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.690927   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.691320   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.691548   44540 main.go:141] libmachine: (multinode-457103) Calling .GetState
	I0930 11:42:15.693062   44540 status.go:364] multinode-457103 host status = "Running" (err=<nil>)
	I0930 11:42:15.693077   44540 host.go:66] Checking if "multinode-457103" exists ...
	I0930 11:42:15.693402   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.693444   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.708789   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
	I0930 11:42:15.709238   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.709734   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.709755   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.710032   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.710202   44540 main.go:141] libmachine: (multinode-457103) Calling .GetIP
	I0930 11:42:15.712639   44540 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:42:15.713066   44540 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:42:15.713093   44540 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:42:15.713186   44540 host.go:66] Checking if "multinode-457103" exists ...
	I0930 11:42:15.713464   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.713510   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.728454   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
	I0930 11:42:15.728847   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.729308   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.729325   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.729706   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.729917   44540 main.go:141] libmachine: (multinode-457103) Calling .DriverName
	I0930 11:42:15.730111   44540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:42:15.730151   44540 main.go:141] libmachine: (multinode-457103) Calling .GetSSHHostname
	I0930 11:42:15.732929   44540 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:42:15.733414   44540 main.go:141] libmachine: (multinode-457103) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:78:f2", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:39:30 +0000 UTC Type:0 Mac:52:54:00:75:78:f2 Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:multinode-457103 Clientid:01:52:54:00:75:78:f2}
	I0930 11:42:15.733431   44540 main.go:141] libmachine: (multinode-457103) DBG | domain multinode-457103 has defined IP address 192.168.39.219 and MAC address 52:54:00:75:78:f2 in network mk-multinode-457103
	I0930 11:42:15.733674   44540 main.go:141] libmachine: (multinode-457103) Calling .GetSSHPort
	I0930 11:42:15.733825   44540 main.go:141] libmachine: (multinode-457103) Calling .GetSSHKeyPath
	I0930 11:42:15.733972   44540 main.go:141] libmachine: (multinode-457103) Calling .GetSSHUsername
	I0930 11:42:15.734123   44540 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103/id_rsa Username:docker}
	I0930 11:42:15.813731   44540 ssh_runner.go:195] Run: systemctl --version
	I0930 11:42:15.820651   44540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:42:15.836304   44540 kubeconfig.go:125] found "multinode-457103" server: "https://192.168.39.219:8443"
	I0930 11:42:15.836339   44540 api_server.go:166] Checking apiserver status ...
	I0930 11:42:15.836372   44540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 11:42:15.850976   44540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup
	W0930 11:42:15.862767   44540 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1113/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0930 11:42:15.862824   44540 ssh_runner.go:195] Run: ls
	I0930 11:42:15.868002   44540 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8443/healthz ...
	I0930 11:42:15.872206   44540 api_server.go:279] https://192.168.39.219:8443/healthz returned 200:
	ok
	I0930 11:42:15.872236   44540 status.go:456] multinode-457103 apiserver status = Running (err=<nil>)
	I0930 11:42:15.872247   44540 status.go:176] multinode-457103 status: &{Name:multinode-457103 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:42:15.872273   44540 status.go:174] checking status of multinode-457103-m02 ...
	I0930 11:42:15.872612   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.872654   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.888161   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0930 11:42:15.888701   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.889236   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.889267   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.889652   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.889849   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetState
	I0930 11:42:15.891408   44540 status.go:364] multinode-457103-m02 host status = "Running" (err=<nil>)
	I0930 11:42:15.891422   44540 host.go:66] Checking if "multinode-457103-m02" exists ...
	I0930 11:42:15.891705   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.891749   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.907426   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0930 11:42:15.907864   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.908352   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.908374   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.908743   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.908908   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetIP
	I0930 11:42:15.912016   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | domain multinode-457103-m02 has defined MAC address 52:54:00:80:e8:70 in network mk-multinode-457103
	I0930 11:42:15.912566   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e8:70", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:40:35 +0000 UTC Type:0 Mac:52:54:00:80:e8:70 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-457103-m02 Clientid:01:52:54:00:80:e8:70}
	I0930 11:42:15.912594   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | domain multinode-457103-m02 has defined IP address 192.168.39.180 and MAC address 52:54:00:80:e8:70 in network mk-multinode-457103
	I0930 11:42:15.912770   44540 host.go:66] Checking if "multinode-457103-m02" exists ...
	I0930 11:42:15.913060   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:15.913093   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:15.928255   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38963
	I0930 11:42:15.928697   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:15.929186   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:15.929202   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:15.929437   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:15.929590   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .DriverName
	I0930 11:42:15.929776   44540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 11:42:15.929794   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetSSHHostname
	I0930 11:42:15.932455   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | domain multinode-457103-m02 has defined MAC address 52:54:00:80:e8:70 in network mk-multinode-457103
	I0930 11:42:15.932856   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:e8:70", ip: ""} in network mk-multinode-457103: {Iface:virbr1 ExpiryTime:2024-09-30 12:40:35 +0000 UTC Type:0 Mac:52:54:00:80:e8:70 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-457103-m02 Clientid:01:52:54:00:80:e8:70}
	I0930 11:42:15.932896   44540 main.go:141] libmachine: (multinode-457103-m02) DBG | domain multinode-457103-m02 has defined IP address 192.168.39.180 and MAC address 52:54:00:80:e8:70 in network mk-multinode-457103
	I0930 11:42:15.933058   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetSSHPort
	I0930 11:42:15.933235   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetSSHKeyPath
	I0930 11:42:15.933386   44540 main.go:141] libmachine: (multinode-457103-m02) Calling .GetSSHUsername
	I0930 11:42:15.933518   44540 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19734-3842/.minikube/machines/multinode-457103-m02/id_rsa Username:docker}
	I0930 11:42:16.016743   44540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 11:42:16.031028   44540 status.go:176] multinode-457103-m02 status: &{Name:multinode-457103-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:42:16.031069   44540 status.go:174] checking status of multinode-457103-m03 ...
	I0930 11:42:16.031427   44540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0930 11:42:16.031473   44540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0930 11:42:16.046562   44540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0930 11:42:16.046967   44540 main.go:141] libmachine: () Calling .GetVersion
	I0930 11:42:16.047504   44540 main.go:141] libmachine: Using API Version  1
	I0930 11:42:16.047523   44540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0930 11:42:16.047808   44540 main.go:141] libmachine: () Calling .GetMachineName
	I0930 11:42:16.048015   44540 main.go:141] libmachine: (multinode-457103-m03) Calling .GetState
	I0930 11:42:16.049604   44540 status.go:364] multinode-457103-m03 host status = "Stopped" (err=<nil>)
	I0930 11:42:16.049635   44540 status.go:377] host is not running, skipping remaining checks
	I0930 11:42:16.049643   44540 status.go:176] multinode-457103-m03 status: &{Name:multinode-457103-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
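As the output above shows, `minikube status` prints one block of type/host/kubelet/apiserver/kubeconfig fields per node and exits with status 7 once any node is stopped, so callers have to accept that exit code and read the text rather than relying on exit 0. A small Go sketch that does the same (binary, profile, and the observed meaning of exit 7 all taken from this log):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-457103", "status")
	out, err := cmd.Output() // stdout is still returned when the command exits non-zero
	var exitErr *exec.ExitError
	if err != nil && !(errors.As(err, &exitErr) && exitErr.ExitCode() == 7) {
		// Exit status 7 just means at least one node is not running (as seen above);
		// anything else is treated as a real failure here.
		log.Fatal(err)
	}
	stopped := 0
	for _, line := range strings.Split(string(out), "\n") {
		if strings.TrimSpace(line) == "host: Stopped" {
			stopped++
		}
	}
	fmt.Printf("nodes with stopped hosts: %d\n", stopped)
}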

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 node start m03 -v=7 --alsologtostderr: (38.090066101s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.24s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 node delete m03
E0930 11:48:21.132048   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-457103 node delete m03: (1.726834161s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.24s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (203.55s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-457103 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-457103 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.028720915s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-457103 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (203.55s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.19s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-457103
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-457103-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-457103-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.522782ms)

                                                
                                                
-- stdout --
	* [multinode-457103-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-457103-m02' is duplicated with machine name 'multinode-457103-m02' in profile 'multinode-457103'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-457103-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-457103-m03 --driver=kvm2  --container-runtime=crio: (44.884045321s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-457103
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-457103: exit status 80 (216.77236ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-457103 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-457103-m03 already exists in multinode-457103-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-457103-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.19s)
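Both negative cases above are detected purely through exit codes: reusing an existing machine name as a profile name makes `start` exit 14 (MK_USAGE), and `node add` against a name that already exists exits 80 (GUEST_NODE_ADD). A stand-alone Go sketch of checking for a specific exit code after one of those commands (profile name and flags copied from the log; this is an illustration, not the test's own helper):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// exitCode runs the command and returns its exit status (0 on success).
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return 0, err // binary missing, killed by signal, etc.
}

func main() {
	code, err := exitCode("out/minikube-linux-amd64", "start", "-p", "multinode-457103-m02",
		"--driver=kvm2", "--container-runtime=crio")
	if err != nil {
		log.Fatal(err)
	}
	if code != 14 {
		log.Fatalf("expected MK_USAGE (exit 14) for the duplicated profile name, got %d", code)
	}
	fmt.Println("duplicate profile name rejected as expected")
}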

                                                
                                    
TestScheduledStopUnix (114.21s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-857979 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-857979 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.616952046s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857979 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-857979 -n scheduled-stop-857979
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857979 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 11:58:30.222345   11009 retry.go:31] will retry after 62.984µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.223513   11009 retry.go:31] will retry after 156.882µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.224656   11009 retry.go:31] will retry after 198.415µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.225778   11009 retry.go:31] will retry after 373.623µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.226912   11009 retry.go:31] will retry after 425.682µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.228030   11009 retry.go:31] will retry after 846.202µs: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.229166   11009 retry.go:31] will retry after 1.235274ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.231370   11009 retry.go:31] will retry after 2.266847ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.234580   11009 retry.go:31] will retry after 1.533248ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.236785   11009 retry.go:31] will retry after 2.694156ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.239977   11009 retry.go:31] will retry after 7.239672ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.248203   11009 retry.go:31] will retry after 7.095993ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.255396   11009 retry.go:31] will retry after 18.032046ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.273605   11009 retry.go:31] will retry after 19.322009ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
I0930 11:58:30.293903   11009 retry.go:31] will retry after 29.541754ms: open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/scheduled-stop-857979/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857979 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857979 -n scheduled-stop-857979
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857979
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-857979 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-857979
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-857979: exit status 7 (65.231689ms)

                                                
                                                
-- stdout --
	scheduled-stop-857979
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857979 -n scheduled-stop-857979
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-857979 -n scheduled-stop-857979: exit status 7 (63.887747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-857979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-857979
--- PASS: TestScheduledStopUnix (114.21s)
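The retry.go lines above show the test polling for the scheduled-stop pid file with steadily growing waits (62µs, 156µs, ... 29ms) until the open succeeds or the attempts run out. A compact Go sketch of that retry-with-increasing-backoff pattern; this is an illustration of the idea only, not minikube's actual retry package, and the pid-file path is a hypothetical stand-in:

package main

import (
	"fmt"
	"os"
	"time"
)

// retryGrowing keeps calling fn until it succeeds or attempts run out,
// roughly doubling the wait each time.
func retryGrowing(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return err
}

func main() {
	const pidFile = "/tmp/scheduled-stop-example.pid" // hypothetical path standing in for the profile's pid file
	err := retryGrowing(10, 100*time.Microsecond, func() error {
		_, statErr := os.Stat(pidFile)
		return statErr
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}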

                                                
                                    
TestRunningBinaryUpgrade (227.77s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.423519609 start -p running-upgrade-850581 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0930 12:00:18.063811   11009 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-3842/.minikube/profiles/functional-020284/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.423519609 start -p running-upgrade-850581 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.703452309s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-850581 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-850581 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.247746229s)
helpers_test.go:175: Cleaning up "running-upgrade-850581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-850581
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-850581: (1.158343355s)
--- PASS: TestRunningBinaryUpgrade (227.77s)
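The upgrade path above is two starts against the same profile: first with a previously released minikube (v1.26.0, downloaded to a per-run temp file) and then, while that cluster is still running, with the freshly built binary, which must adopt the existing cluster in place before the profile is deleted. A Go sketch of driving that sequence (the old-binary path below is a placeholder; the test uses a random temp-file name, and all flags are copied from the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one minikube invocation, streaming its output.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "running-upgrade-850581" // from the log above
	oldBinary := "/tmp/minikube-v1.26.0"     // placeholder; substitute your copy of the old release
	// 1. Bring the cluster up with the old release.
	run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")
	// 2. Start again with the binary under test; it must take over the running cluster.
	run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=crio")
	// 3. Clean up.
	run("out/minikube-linux-amd64", "delete", "-p", profile)
}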

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.864007ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-791924] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3842/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3842/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (100.3s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791924 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791924 --driver=kvm2  --container-runtime=crio: (1m40.053456486s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791924 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.6s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.260619263s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791924 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-791924 status -o json: exit status 2 (246.262821ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-791924","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-791924
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-791924: (1.088425489s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.60s)

                                                
                                    
TestNoKubernetes/serial/Start (28.28s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791924 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.280291416s)
--- PASS: TestNoKubernetes/serial/Start (28.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.650319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
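
The verification above relies on systemctl's exit codes: `systemctl is-active --quiet` returns 0 only when the unit is active, and a non-zero status (3 inside the guest, surfaced by `minikube ssh` as exit status 1) means kubelet is not running, which is the desired state here. A minimal sketch of the same check from Go, assuming the binary path and profile name shown in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletRunning asks the node over `minikube ssh` whether the kubelet unit
    // is active; a nil error means the remote command exited 0, i.e. active.
    func kubeletRunning(profile string) bool {
    	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
    		"sudo systemctl is-active --quiet service kubelet")
    	return cmd.Run() == nil
    }

    func main() {
    	if kubeletRunning("NoKubernetes-791924") {
    		fmt.Println("kubelet is active")
    	} else {
    		fmt.Println("kubelet is not running, as the test expects")
    	}
    }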

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-791924
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-791924: (1.405417545s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791924 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791924 --driver=kvm2  --container-runtime=crio: (44.534626515s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.536523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    

Test skip (32/202)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    